Roger Azevedo
The evolution of learning technologies has led to the use of intelligent virtual humans (IVHs) to facilitate problem solving, reasoning, and learning across domains. Despite promising results in rapport building, cultural training, and informal science learning, IVHs have not been used to support learners' cognitive, affective, and metacognitive (CAM) self-regulation during multimedia learning. The objective of this poster is to share data from an investigation of how an IVH capable of facially expressing emotions (e.g., confusion, joy, neutral) can trigger learners' ability to monitor and regulate their CAM self-regulatory processes and thereby enhance their science learning with multimedia (e.g., text and diagrams). Analyses of multichannel data (e.g., log files, eye tracking, facial expressions of emotion, physiological data, and screen capture of learner-system interactions) indicate that the IVH's facial expressions triggered students' monitoring of their CAM self-regulatory processes during complex science learning with multimedia.