PIs: Justine Cassell, Louis-Philippe Morency, Amy Ogan
This project seeks to understand, and to capitalize on, how teachers and tutors build rapport with learners, and to build technologies that support rapport. The research will study what rapport with learners looks like, when students deploy rapport techniques, when and how deploying rapport techniques (whether by people or by automated agents) increases learning, and how rapport evolves over time. The project will also build software that can help measure rapport both among learners and between learners and computers.
The project will begin by building a multimodal rapport-detection system, based on recent advances in computer vision, signal processing, and machine learning, which will automatically recognize audio and visual behaviors during learner interaction with an intelligent tutoring system. Human-human tutoring interactions will be used to guide development of the rapport-detection system. Both short-term and longitudinal analyses will be conducted with students working with an AI-based math tutor, focusing on their visual behaviors (facial action units, gestures such as head nods and shakes, and mutual gaze between humans, with head and gaze estimation used as measures), verbal behaviors (using CoreNLP and other software to detect utterances that express rapport-related social constructs such as politeness and friendship), and entrainment behaviors (synchrony and asynchrony, convergence and divergence).

The project will then design RAPT, the Rapport-Aligned Peer Tutor, which combines the rapport-detection system with an intelligent pedagogical agent that accounts for the persistent social states of rapport and non-rapport. Mockups and simulations of the interface will be used to test the designs before the full pedagogical agent is built. Trials will be conducted in grade 9-11 classrooms working with an intelligent geometry tutor, using a two-iteration design-based research study.
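To illustrate the entrainment analysis mentioned above, one common approach is to compute a windowed correlation between two speakers' behavioral feature streams (e.g., per-turn speech rate or pitch), where rising correlation suggests convergence and falling correlation suggests divergence. This is only a minimal sketch of that general technique; the function names and window size are illustrative assumptions, not part of the project's actual system.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy) if vx and vy else 0.0

def entrainment_profile(tutor, learner, window=4):
    """Windowed correlation of two behavioral feature streams
    (hypothetical feature values per turn). Values near +1 indicate
    synchrony/convergence; values near -1 indicate divergence."""
    return [pearson(tutor[i:i + window], learner[i:i + window])
            for i in range(len(tutor) - window + 1)]
```

A full system would of course extract these feature streams automatically from audio and video rather than take them as given, and would likely use lagged cross-correlation to capture one partner following the other.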