Meet Roger Azevedo

CIRCL perspectives offer a window into the different worlds of various stakeholders in the cyberlearning community — what drives their work, what they need to be successful, and what they think the community should be doing. Share your perspective.

Roger Azevedo is a professor in the Department of Learning Sciences & Educational Research at the University of Central Florida. He is the Lead Scientist for UCF’s Learning Sciences Faculty Cluster Initiative. Roger examines the role of cognitive, metacognitive, affective and motivational self-regulatory processes during learning with computer-based learning environments.

See also the CIRCL Spotlight on Roger’s work.

How did you get started in cyberlearning?

It started during my transition from being an undergrad, where I did a lot of rat research in neuropsychopharmacology, including implantation of electrodes. I got tired of the rats and wanted to focus on human learning. I switched to doing my master’s in educational technology at Concordia University, working with physicians and radiologists, and I was just amazed that we were still using multiple choice questions to measure medical reasoning and problem solving. At that time, hypermedia and hypertext were getting big. That was the impetus for me to jump into the area of cognitive science and pursue a PhD at McGill University, because I wanted to understand problem solving processes like reasoning and decision making and how we measure those processes — especially at that time in medicine, given that they’re dealing with people’s lives and diagnoses. Multiple choice just didn’t seem like a good way to measure medical reasoning. I don’t want to go to a doctor who got a diploma based on multiple choice prowess.

That led to the last couple of decades of my work, which focuses on STEM and biomedical domains and the collection of multimodal, multichannel data that captures cognitive, metacognitive, affective and motivational processes. Can we instrument the learner to collect those processes in real time, regardless of the task, context, or technology they’re using, and then can we model those processes? Can we get a machine to model those processes and be more affect-sensitive, and more intelligent, if you will? So we get into issues of what to model, when to model, how to model it, and why we are modeling it.
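
To make the instrumentation idea concrete, here is a minimal sketch of how a few such channels might be aligned on one shared timeline before any modeling happens. The channel names, sampling rates, and values are illustrative assumptions, not Roger’s actual pipeline.

```python
# Illustrative sketch: align eye tracking, physiology, and log events
# on one timeline so they can later be modeled jointly.
# All channel names, rates, and values are hypothetical.
import numpy as np
import pandas as pd

np.random.seed(0)

# Eye tracking sampled at ~60 Hz (timestamps in ms)
eye = pd.DataFrame({"t_ms": np.arange(0, 5000, 16)})
eye["pupil_mm"] = 3.0 + 0.2 * np.random.randn(len(eye))

# Skin conductance sampled at ~4 Hz
eda = pd.DataFrame({"t_ms": np.arange(0, 5000, 250)})
eda["eda_uS"] = 1.5 + 0.1 * np.random.randn(len(eda))

# Sparse log-file events from the learning environment
logs = pd.DataFrame({
    "t_ms": [400, 1800, 4200],
    "event": ["open_diagram", "take_notes", "run_simulation"],
})

# merge_asof attaches to each eye sample the most recent
# physiology reading and log event at or before that moment.
merged = pd.merge_asof(eye.sort_values("t_ms"),
                       eda.sort_values("t_ms"), on="t_ms")
merged = pd.merge_asof(merged, logs.sort_values("t_ms"), on="t_ms")
print(merged.head())
```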

To tie it all together: in my PhD work I looked at medical reasoning and novice-expert differences in mammography interpretation, and developed intelligent tutoring systems for well-structured domains. Then, realizing that most of the tasks we deal with in such domains are ill structured, I transitioned to more ill-structured tasks, from simple domains to more complex domains, and from exclusively cognitive processes to metacognition and emotions. I’ve also come full circle in moving from individual problem solving to more collaborative problem solving, and now the question is more of a modeling one: it’s not just modeling in the human, but can we also have an external AI system, or a combination, that models what we do and lets us learn from each other?

Where do you want to take your work?

Thinking about the future, the short talk I gave during the Cyberlearning 2017 Shark Tank is really where we want to go: we’re collecting all this multimodal data on humans, and we have to train these data scientists — are we preparing our undergrads to use these data? The old days of sitting in front of SPSS and running an ANCOVA on pretest and posttest data are still important but not enough. What information and inferences are they drawing from this multimodal, multichannel data? Knowing that we are biologically limited as humans, the question is whether we can bring in robotics and virtual humans to learn how data scientists make inferences about this multimodal data.

So now we have a heavy emphasis on AI, machine learning, and deep learning. I’d love to see a virtual human or a robot be instrumented and connected to an instrumented data scientist and get to a point where they are able to meta-reason — that is, turn to the data scientist and say something like, “Listen, there is a lot of physiological data that was observed from this human learner, but you’re not paying attention to it. Why not?” Can we have a productive collaboration between virtual and human data scientists?

What are you struggling with now?

One struggle is dealing with a new generation of graduate students — getting them to get off Facebook and dedicate long periods of time to deep thinking and critical thinking. I find that I have to spend more time teaching them to be systematic and disciplined thinkers about what they’re doing, to develop a work ethic and seriousness. My rule of thumb has been that if you’re a new graduate student, you have a grace period of a semester or a year with me, but after that, if you’re not serious, you’re out. If I’m going to invest in you and you want to be a scientist, then there are certain criteria. I’m not sure how to deal with that, and it’s a real struggle.

Another struggle is methodological and analytical. Those of us who collect rich data — whether it’s eye tracking, physiological, or log file data — struggle with not having tools that would facilitate our work as researchers in analyzing the multimodal data. It would be really great if we could focus on creating some new tools that would help us make choices, make inferences, and otherwise facilitate our analysis.
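
As one illustration of what such a tool might offer, the sketch below segments an aligned multimodal table (like the one in the earlier sketch) into fixed time windows and computes simple per-window summaries a researcher could inspect or feed to a model. The window length and the features are arbitrary assumptions, not a prescription.

```python
# Hypothetical helper: per-window summary features over an aligned
# multimodal table; window size and feature choices are illustrative.
import pandas as pd

def window_features(df: pd.DataFrame, win_ms: int = 1000) -> pd.DataFrame:
    """Summarize each win_ms window of an aligned multimodal table."""
    df = df.copy()
    df["window"] = df["t_ms"] // win_ms
    return df.groupby("window").agg(
        pupil_mean=("pupil_mm", "mean"),
        pupil_var=("pupil_mm", "var"),
        eda_mean=("eda_uS", "mean"),
        n_events=("event", lambda s: s.notna().sum()),
    )

# features = window_features(merged)  # using the table from the earlier sketch
```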

Multimodal data can also lead to publication issues. For example, my graduate student just submitted a paper to a top journal that focused on eye tracking and physiology data, and we had to mention that we collected other data, too. So we got a response saying the paper was almost rejected because we didn’t include the other channel data. You don’t want to have piecemeal publications, but we can’t shove all of this multichannel data into one article. Academic journals, editors, and publishers will need to address the issue of complex publications that are interdisciplinary in nature and that require additional review criteria.

There is also an issue around sharing data with colleagues. We share data with many collaborators across the U.S. and in other countries, including Canada and in Europe. We are collecting enough data for my students and postdocs to analyze and publish as first authors. How do we establish an equitable sharing model to maintain a collaboration with colleagues? Different disciplines have different expectations, and we are working to reveal what those are so we have a more smoothly running machine. Different faculty, across disciplines, also have different mentoring models that sometimes interfere with productive collaboration and co-authoring of scholarly products. Also, interdisciplinary research is not always valued by department chairs, deans, and others, and this has implications for promotion, advancement, and merit.

Finally, how do we help traditional journals, like the Journal of Educational Psychology, see that doing data mining and machine learning is okay? Not every article has to be a structural equation model with thousands of students or self-report measures. What can we learn from different types of studies? That also applies to grant applications. We’re going to be doing a lot of data mining and machine learning. How much of the 15 pages should we allocate to the research plan, knowing that exploratory research is going to be a big component of this work? Traditionalists will want hypothesis-driven research questions; however, given the complexity of human learning, there should be some leeway in accepting exploratory work (e.g., using data mining) as well, since it can be useful in further delineating underlying processes and can lead to hypothesis generation that can then be empirically tested.
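
To illustrate the exploratory, hypothesis-generating style of analysis described here, a minimal sketch: clustering learners on hypothetical log-derived features, where the resulting profiles would be treated as hypotheses for later confirmatory tests, not as findings. The features, cluster count, and data are made up.

```python
# Exploratory sketch: cluster learners on log-derived features as a
# hypothesis-generation step (features and k are illustrative).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical per-learner features: note-taking rate, rereads, quiz score
X = rng.normal(size=(200, 3))

X_std = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_std)

# Each cluster is a candidate learner profile whose differences can then
# be tested with confirmatory, hypothesis-driven methods.
for k in range(3):
    print(f"cluster {k}: {np.sum(labels == k)} learners")
```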

What kind of infrastructure might help address these issues or accelerate your work?

From a technical perspective, being human, the idea is often that I’m going to develop my own tool. We need to figure out better ways to share tools so we aren’t reinventing the wheel; that slows down progress. I’ve been talking to Gautam Biswas a lot about this. Both he and I have worked with CMU folks like John Stamper and the Pittsburgh Science of Learning Center (PSLC) DataShop with some of our data (such as Betty’s Brain and MetaTutor data). DataShop is a great tool, but the data that we collect do not really fit into the way they have set up their architecture. Here is a great tool that CMU developed over 10 years, and there is no more funding for it. What does that mean for the rest of us who don’t look at knowledge components or use an ITS with a well-structured domain? How do we find, create, and share resources, including analytical tools or tools for data analysis? That’s a big issue for us.

I’ve invested a lot of funding and research effort in iMotions tools, and they have been very responsive. We used their facial expression analysis engine (a module called FACET) to analyze facial expression data, but all of a sudden that software was bought by Apple. Apple is sitting on it; they’re not going to do anything with it, so now we have to buy another piece of software. As researchers, it seems like we’re always dependent on some company, hoping they don’t go bankrupt or stop production. This puts a great burden, especially a financial burden, on projects like ours, which expect agencies like NSF to fund this work. Or other devices: we invested a lot in Eye Tribe for eye tracking because they were cheaper, so we could bring them into schools and instrument a class of 20-30 kids, but all of a sudden they are no longer producing. We don’t want to have to track eye movements in schools by pulling kids out one at a time or bringing them to our lab.

How do you work with schools?

That’s one of our biggest challenges. I’ve been in Pittsburgh, Memphis, Montreal, and College Park, and it was never that much of a challenge to get into schools, especially middle schools and high schools. Being here in Raleigh, it has been such a challenge. Usually we can get in with a couple of teachers, and then there is a drop-off. I know teachers are inundated and have a lot of responsibilities, but it’s been hard to get our work into the schools. We have a few inroads now into private schools, but from my perspective, especially studying self-regulation, I really don’t want to study the kids who already know how to self-regulate. They’re going to be the elite students who go to the top schools. I want to focus on the kid who is struggling to read or struggling to understand science, and really help him or her.

Can you say more about how you want to help students?

We would love to focus on training students to engage in emotion regulation. Those of us trained as cognitive scientists typically focus on cognition rather than emotions and motivation. Our research, particularly over the past 10 years, has come around to understanding that if I can’t regulate my emotions, it’s going to impact my memory, my learning, and my performance. Can we collect data through physiology, facial expressions, or log files, and come up with a model so that we can embed agents in the software that detect emotions and then help train students to regulate their emotions? If you’re not able to regulate confusion, can I get you to acknowledge that you’re confused about a particular diagram or simulation — and tell me why? Could a virtual human model that, and allow students to practice emotion regulation strategies? A lot of our recent work draws on James Gross at Stanford. He has an emotion regulation model that has been used mostly for clinical and social applications. We’re trying to bring it into the learning context.
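
As a toy illustration of the detect-then-prompt idea (not the actual MetaTutor or FACET pipeline), the sketch below applies a simple threshold rule to hypothetical confusion scores and fires an agent prompt only when confusion is sustained. The scores, threshold, and prompt wording are all assumptions.

```python
# Toy sketch of a detect-then-prompt loop: confusion scores, threshold,
# and prompt text are hypothetical, not output from any real engine.
from dataclasses import dataclass

@dataclass
class Frame:
    t_ms: int
    confusion: float  # e.g., a 0-1 score from a facial-expression model

def agent_prompts(frames: list[Frame], threshold: float = 0.7,
                  min_run: int = 3) -> list[int]:
    """Return timestamps where confusion stayed high for min_run frames."""
    prompts, run = [], 0
    for f in frames:
        run = run + 1 if f.confusion >= threshold else 0
        if run == min_run:  # sustained confusion -> intervene once
            prompts.append(f.t_ms)
    return prompts

frames = [Frame(t, c) for t, c in
          [(0, 0.2), (33, 0.8), (66, 0.9), (99, 0.85), (132, 0.3)]]
for t in agent_prompts(frames):
    print(f"{t} ms: 'You seem confused about this diagram -- can you "
          f"tell me what is unclear?'")
```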

Another thing is motivation. We talk to colleagues and they say their kids aren’t motivated – what do you mean by that? How can we get kids to be more interested in, and place more value on, the task that they’re doing? We talked to an eighth grade student who didn’t want to do a simulation about nitrates and phosphates. We asked him, what do you want to do when you grow up? “Oh, I want to be a physician.” Well, you’ll have to learn about nitrates and phosphates. They’re not making that link. We’re also having trouble getting college kids to make inferences. One of our systems prompts them to make an inference, and some don’t know what that is.

How do we model these sophisticated learning strategies for children? We see these kids texting and Facebooking, and I worry that they are losing those social skills and the ability to regulate. They can give up too easily. Sometimes you’re going to have to dedicate long periods of time to thinking hard about something and struggling. We do a disservice to kids if we say you’re not going to have to struggle, we’ll make learning easy. Sometimes learning is hard, and that’s okay. I think that’s something that current technologies don’t really model for kids. It’s good to bring in the socio-emotional aspect: what you’re about to do is really challenging, but I’m going to be here with you as a learning companion.

And sometimes the students just want to be entertained. Some researchers answer that by trying to create serious games for learning. But as researchers, we don’t have the resources. How do we even compete with games by Ubisoft and Epic? We tried creating a game with an IES grant a couple of years ago, and the kids didn’t like how it looked. We don’t have the budget to create the beautiful graphics, and Epic is just down the street from us. But the business model for these companies is income, not research. How do we connect with gaming companies to create an effective game for learning and research? With my colleague James Lester, I have been thinking about creating a serious game that teaches self-regulatory skills. So instead of getting points for shooting something, you get more points because you decided to use a sophisticated cognitive learning strategy. We’re rethinking the game paradigm and reward system — for example, here’s a piece of text on science, and if you detect the misconception, then you get rewarded for that.
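
A minimal sketch of that reward idea, with made-up point values and event names: points accrue for strategy use and for catching a seeded misconception, rather than for combat actions.

```python
# Hypothetical game reward rule: points for strategy use and for
# spotting a seeded misconception, not for combat actions.
STRATEGY_POINTS = {
    "summarize": 10,
    "draw_inference": 25,
    "detect_misconception": 50,  # highest reward: critical reading
}

def score(events: list[str]) -> int:
    """Total points for a sequence of in-game learning actions."""
    return sum(STRATEGY_POINTS.get(e, 0) for e in events)

print(score(["summarize", "detect_misconception", "jump"]))  # -> 60
```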

But generally, the technologies we’re seeing are so advanced, and they’re being released so quickly, that when we compare advances in technology with advances in the education and learning sciences, we’re way behind.

What makes you wake up every morning and want to work on this?

As I get older, it feels more important to give back to the community. I worry about the disparity issue, and it’s probably going to get worse. I learned from a teacher after one classroom study that two of the kids in the classroom were homeless, living in a car. We were fortunate to have them even come to class. I can’t imagine what it would be like to be homeless and still come to school. How does a child deal with that? And how can I infuse my love of learning in these children?

Some of it is also baggage. I was raised in a blue-collar family, born in Africa. When I was 8 years old we had to leave because of civil war. It was either you leave, or we kill your family. So it was kind of an easy decision, if you have the means. Then, as immigrants in Montreal who were neither anglophones nor francophones, we went to poor schools. There were no expectations of going to a university. The opportunity of going to McGill University for a PhD was mind-blowing to me, not to mention being funded by the Canadian federal government to do my postdoc training in cognitive psychology at Carnegie Mellon University! It was definitely not in the plans of my parents. Going against the grain was a struggle. I empathize with that.