Multimodal Learning Analytics

Location: Alcott Boardroom
This roundtable is part of the Cyberlearning 2016 Roundtable session.

Speech-Based Learning Analytics for Collaboration

Cynthia D’Angelo
This project collects students' speech as they work together on computer-supported collaborative tasks. We will discuss how to integrate multiple forms of student input (speech, online logs, etc.) into a coherent analysis.
Project: Homepage, NSF Award #1432606 – Speech-Based Learning Analytics for Collaboration

Unlocking the potential of spoken language technologies for education research

Chad Dorsey
Technologies for processing and understanding spoken language have become far more sophisticated in recent years, but educational research has barely begun to tap their potential. Come learn about this important new field.
Project: Homepage, NSF Award #1550800 – CAP: Building Partnerships for Education and Speech Research

Multimodal Learning Analytics

Marcelo Worsley
This roundtable discussion will introduce techniques and tools for doing multimodal learning analytics. We will discuss ways to extract and analyze speech, gesture, emotion, and more.
Project: Homepage, NSF Award #1548254 – BIGDATA: EAGER: Catalyzing Research in Multimodal Learning Analytics