EXP: Linking Eye Movements with Visual Attention to Enhance Cyberlearning

8/15/16-7/31/18

PIs: Daniel Levin, Adriane Seiffert, Gautam Biswas
Vanderbilt University

Although hardware and software solutions are rapidly advancing the ability to detect and track cyberlearners’ eye movements, scientific understanding of the link between these eye movements and actual learning remains tentative. This issue is particularly important because research demonstrates surprising limits to the visual information that people take in: even when it can be demonstrated that learners have looked at something, there is no guarantee that they have gained knowledge of what they saw. This project will address the problem in two ways. First, the researchers will develop a cognitive theory that specifies how eye movements reveal what cyberlearners have absorbed when they view and interact with technology-based learning systems. Second, the researchers will develop a novel software application that helps cyberlearning content creators incorporate assessment of eye movements into their practice. These two strands of work will converge not only to produce cognitive theory that helps cyberlearners achieve more effective interactions, but also to enrich that theory with input from real-world cyberlearning practitioners who struggle every day to understand the sometimes confounding gap between showing learners something and their actual ability to understand and remember what they have seen.

In particular, the investigators hypothesize that the link between fixation patterns and learning is mediated by visual modes that vary the relationship between concrete coding of visual properties and abstract focus on causal relationships and the goals of actions. The project will include experiments in which learners have their eyes tracked while they view a screen-captured information technology lesson. Some learners will be induced to deploy an “encoding” mode, in which they focus on the specific sequence of steps needed to complete the task, while other learners will view the same materials in a “causal” mode, in which they focus on the concepts underlying the lesson. Initial research has demonstrated significant differences in fixation patterns between these tasks (the strongest being that learners follow the instructor’s mouse movements more closely in the encoding mode), and the current project will test whether these modes are associated with different patterns of visual and conceptual learning. The project will leverage these results by incorporating mode-revealing analytics into a novel software application that allows content creators to record screen-capture videos of their lessons while recording their own eye movements. In addition, a panel of viewers equipped with their own eye trackers will view the content creators’ lessons. Viewer eye movements will be returned to the content creators, who will be able to view fixation patterns in the application alongside analytics based on findings from the visual mode experiments. The prototype system will be integrated with an existing learning technology, the “Betty’s Brain” science education courseware, and deployed in both formal and informal learning environments, including the Nashville Adventure Science Center.
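To make the idea of a “mode-revealing analytic” concrete, here is a minimal sketch of one plausible metric suggested by the paragraph above: the average distance between a viewer’s gaze and the instructor’s mouse cursor, which the initial research found to be smaller in the encoding mode. The data format, function names, and the classification threshold are illustrative assumptions for exposition, not the project’s actual implementation.

```python
# Hypothetical sketch of one mode-revealing analytic: mean gaze-to-mouse
# distance over a lesson. Smaller distances would be consistent with the
# close mouse-following the project associates with the "encoding" mode.
# Data formats, the threshold, and the decision rule are illustrative
# assumptions, not the project's actual analytics.

import math

def mean_gaze_mouse_distance(gaze_samples, mouse_samples):
    """Average Euclidean distance (pixels) between time-aligned gaze and
    mouse positions. Each sample is a (timestamp_s, x, y) tuple; the two
    streams are assumed to be sampled at the same timestamps."""
    distances = [
        math.hypot(gx - mx, gy - my)
        for (_, gx, gy), (_, mx, my) in zip(gaze_samples, mouse_samples)
    ]
    return sum(distances) / len(distances) if distances else float("nan")

def classify_viewing_mode(mean_distance, threshold_px=150.0):
    """Toy decision rule: close mouse-following suggests an encoding mode;
    larger gaze-mouse separation suggests a causal mode. The threshold is
    a placeholder that real eye-tracking data would have to calibrate."""
    return "encoding" if mean_distance < threshold_px else "causal"

# Example with fabricated samples:
gaze = [(0.0, 400, 300), (0.1, 410, 305), (0.2, 430, 310)]
mouse = [(0.0, 395, 298), (0.1, 405, 300), (0.2, 425, 315)]
d = mean_gaze_mouse_distance(gaze, mouse)
print(f"mean gaze-mouse distance: {d:.1f}px -> {classify_viewing_mode(d)}")
```

In practice an analytic like this would be one of several signals (fixation durations, dwell on conceptual regions of the screen, and so on) that the application could surface to content creators alongside raw fixation patterns.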
