Robb Lindgren

Meet Robb Lindgren

CIRCL perspectives offer a window into the different worlds of various stakeholders in the cyberlearning community — what drives their work, what they need to be successful, and what they think the community should be doing.

Robb Lindgren is an assistant professor in the College of Education at the University of Illinois at Urbana-Champaign.

See Robb’s 2017 Video Showcase Video: ELASTIC3S: Embodied Learning Augmented through Simulation Theaters for Interacting with Cross-Cutting Concepts in Science

Congratulations to Robb for receiving the 2017 Jan Hawkins Award!

How did you get started in cyberlearning?

I started in a Learning Sciences doctoral program at Stanford in 2003, working with Roy Pea on a project called DIVER, an online collaborative video analysis tool. DIVER was my first foray into learning technology design and into thinking about ways to exploit the affordances of new technology to enhance educational environments. At the time, video was just starting to be shared online, and the interactions people had with video were very limited, mostly just hitting the play button. But advances in technology were suddenly allowing for much more interactivity, and the DIVER project tried to leverage those new capabilities to generate new opportunities for learning and for building online communities. For example, in DIVER you could add visual and textual annotations, essentially enabling an expert to draw attention to what’s important in a video event. DIVER made it so that complex and specialized video events could be unpacked and transformed into something novices and other members of a community could learn from.

That idea of using technology to enhance events and artifacts, and to insert expert-like perspectives into experiences, is really what’s driven my work ever since. At that time we were working with video; now I work with virtual environments and augmented reality, which are wonderful technologies grounded in the idea that we can immerse someone in a new perspective, and perhaps even a new identity. Roy and I have described a vision of what we call “inter-identity technologies”: a new genre of technology interactions that in some way merges your identity with the identity of others, perhaps others with more knowledge and skills than you currently have, as a way of creating new learning and understanding.

What is unique about your work?

There’s increasing recognition that the body—how it moves and how it takes in information from the world—plays a significant role in how people think and learn. There are a lot of great people working on research related to embodied learning and on new technologies that can interface with natural physical interactions (touch, gesture, etc.). I think what makes my work unique is that I try to figure out ways to embed learners’ movements within simulations and visualizations, essentially making them part of the system they are trying to learn about. This gives students an “inside” perspective rather than having them simply observe or manipulate from the outside. For example, for my GRASP project we have created simulations of molecular interactions that allow students to make hand gestures to act out how they think molecules move to create air pressure or transfer heat. Instead of “hands on” learning, we aim to create opportunities for “hands in” learning.

If your project succeeds, how could learning be transformed?

Current learning environments are starting to adopt more sophisticated and interactive technologies, such as mixed or augmented reality. But there’s typically no strong pedagogy or learning theory behind the use and implementation of these technologies. The inclusion of new technologies is often driven by how novel or engaging they are perceived to be, as opposed to how effective they are at connecting the experience with the things we want people to learn. In my lab, we want our technology designs to allow for embodied interaction, so that students can tap into familiar movements and understandings as a means of creating new connections and new insights. We want technologies that allow students to act things out and make predictions, that make real-time data available for discussion and reflection, and that provide analytics allowing instructors to productively intervene and model expert-like behaviors. My projects in particular aim to create a clear demonstration of the interactions between body actions and learning outcomes, such that they can serve as a model for other project designs. We want to inspire new technology designs where children can move, engage, and play, but where we can also expect substantive learning gains and more sophisticated levels of performance.

What are you thinking about now?

I’m thinking a lot about learning transfer, and whether designed embodied interactions can facilitate it. There have been a few successes showing that physical interactions can support learning in fairly specific domains. My own work, for example, showed that using the movement of one’s body to make predictions about how objects in space will move is good for learning. But now I want to know whether the intuitions of our body can be leveraged for higher-order learning that can be put to work when a student encounters a new domain. My Cyberlearning project (ELASTIC3S) is beginning to look at this question, examining whether gestures that are successfully used to interact with one topic (e.g., earthquakes via the Richter scale) can facilitate learning of a second topic (e.g., acidity and basicity via the pH scale). In my view, embodied learning will really show its value to cyberlearning when it’s demonstrated to be robust, deep, and transferable.