CIRCL perspectives offer a window into the different worlds of various stakeholders in the cyberlearning community — what drives their work, what they need to be successful, and what they think the community should be doing. Share your perspective.
James C. Lester is Distinguished University Professor of Computer Science and Director of the Center for Educational Informatics at North Carolina State University. He is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI). His research on AI-driven learning technologies ranges from intelligent game-based learning environments and multimodal learning analytics to affective computing, computational models of narrative, and natural language tutorial dialogue. The adaptive learning environments he and his colleagues develop have been used by thousands of students in K-12 classrooms throughout the US and internationally.
As a computer scientist, how did you get involved with adaptive and personalized learning?
Great question. I’ve been interested for as long as I can remember in learning technologies. I did my dissertation in Computational Linguistics, and the focus there was on generating explanations for students. And it was an interesting experience because we did not have an intelligent tutoring system, and we did not have an adaptive learning environment. We had no users. So, essentially every key ingredient that you would like to see in an adaptive learning intervention was missing. But I always had the idea that it would be really the most interesting thing in the world to build a system that could support adaptive learning.
So, that work was my dissertation. That was 1994 at the University of Texas at Austin. I became a faculty member at North Carolina State University a few months later and had the very good fortune of connecting with colleagues in education, many of whom I still collaborate with. So, we have had numerous collaborations, obviously not just at NC State but around the country, with folks in STEM education, the learning sciences, and psychology (especially, but not limited to, ed psych). We pursue every project in a collaborative fashion. So, in many ways, I feel at least as connected to the learning sciences as I do to computer science, although technically, I am a computer scientist by training, and I’m a faculty member in a department of computer science.
It seems that these days people know what adaptive learning is. But in 1994, to what extent did schools know about adaptive learning and was there an appreciation that it was going to continue to grow?
So, in the wider public, there absolutely was not. There was a complete lack of recognition of the potential of these kinds of technologies. That was certainly true in computer science. But actually in education at that time, there was a great hunger for technologies that could support learning in new and interesting ways. And certainly, they didn’t think about them in the way that we think about them now, but there was absolutely a recognition that what was going on in the classroom could benefit considerably from technologies … especially in science, where there’s the opportunity to deeply interact with science phenomena. So, we did our earliest game-based learning environment, Design-a-Plant, starting in 1994. And it really grew out of that.
Can you tell us about your recent NSF award and your current research agenda around that award?
We actually have a couple of recent awards that we’re super excited about. One of them is on the formal side and one of them is on the informal side. So, on the formal side, we’re working on a collaborative project with Dr. Krista Glazewski, Dr. Tom Brush, and Dr. Cindy Hmelo-Silver, all three of whom are at Indiana University. And they bring really deep expertise in pedagogy, teacher education, and computer-supported collaborative learning. We actually have a previous and ongoing collaboration with them where we’re looking at collaborative game-based learning environments, which is itself a lot of fun.
But this new project is funded by NSF’s Future of Work at the Human-Technology Frontier program, and it’s part of NSF’s 10 Big Ideas. And we’re really interested in thinking about what it means to provide technologies that can support teachers as workers. So, the Future of Work program is really thinking about how we can support workers in the future. And it might be blue-collar jobs, but it could also be white-collar jobs. And the job, of course, that we’re most excited about is that of the teacher. So, we spent a good bit of time before the project was funded thinking about what it might mean to introduce cognitive assistants for teachers in the classroom. The entire team grew really excited about this idea of intelligent cognitive assistants for teachers.
The new project is called the Intelligent Augmented Cognition for Teaching framework (I-ACT), and it’s looking at an augmented cognition framework in three different phases. We’re looking prospectively at supporting teachers in kind of the lesson planning phase of their lives. So, what does it mean to provide advice to teachers about planning the next day’s lesson or next week’s lessons? Then, we’re looking at the real-time side, the kind of concurrent side of support. So, what does it mean to provide real time support to teachers in the classroom? And I should say, in large part because Cindy is involved with this, we’re again looking at collaborative learning in the classrooms, and we’re looking at cognitive assistants that can support teachers supporting kids learning collaboratively. So, this is a really, really interesting setting.
The way that we’re studying this is taking the richest instrumentation of a classroom that you can imagine and bringing all the multimodal data streams which that conjures up, and then using those to machine-learn models of support. And this project is just getting off the ground. So, we’re looking forward to seeing how that plays out in the classroom. Finally, the third phase is rooted in teachers’ reflections. How can we provide technologies that will retrospectively support teachers after the fact? What kinds of interventions could have gone better? Is there a different ordering? Are there different orchestration techniques that they might have considered? So, we’re really interested in trying to understand what kinds of support will make teachers more effective, but we’re also equally interested in supporting what the Future of Work program calls “quality of work life.” So I think of it as engagement in the profession of teaching. What can we do to make teachers not only better on the job, but also really focus on how to help them fully appreciate the joy of teaching?
Are you targeting a specific experience level among your teachers? Are you targeting novice teachers, or is it going to be a blend of new teachers, old teachers, somewhere in between?
Interestingly, we’ve scoped the project by student demographics rather than teacher experience. So, we’re working exclusively with middle school teachers. We considered pre-service teachers, but the project is fully focused on in-service teachers. And something that Krista and Tom are especially interested in is kind of the community of practice supported by this notion of “intelligent augmentation,” as the Future of Work program calls it. But it will be really interesting to see how the greenest of green teachers are helped by these technologies, and whether their experience is fundamentally different from that of a seasoned teacher. How can cognitive assistants really promote community within schools, and what does it mean to do that? Engaging teachers at different ends of the experience spectrum is super interesting.
You’re known for your work with artificial intelligence (AI) in terms of increasing student engagement and learning. Can you talk a little bit about your work in AI, where it’s come in the past, and where you may think it may be going, specifically around schooling?
Absolutely. First, let me quickly mention the project on informal learning, as it’s very much related to your question. On the informal side, the new project is looking at computational models of engagement. There’s been a very rich history of people who work in informal learning contexts, and I’m not one of them. I’m new to this. We’ve only had one project before on the informal side, but there’s a very strong interest in understanding visitor engagement at science centers and museums. So, this new project is looking at multimodal visitor analytics for museums. It is highly analogous to what we’re doing with the Intelligent Augmented Cognition for Teaching project in the classroom. But, whereas in the classroom we’re studying teachers and students, in the museum, we’re studying visitors. The intervention itself is an interactive tabletop exhibit that we developed in a previous project for sustainability education.
So, think of it as a very, very large iPad that multiple users can interact with at the same time. It provides the raw materials for learning analytics, but then pairing that with video and audio. So, we’re trying to take all of these together and then machine learn models of engagement in order to examine how we can empirically understand what it means for museum visitors to be engaged. The deliverable at the end of the project is a deeper understanding of learning. So, what does it mean for a collaborative group of visitors to engage in an informal learning setting with science? We’re especially interested in seeing what the synergies may be between these two projects.
Is there a targeted age range with the informal learning and can you speak to the difficulty in controlling certain factors within the informal environment as compared to the more formal environment?
There is. It’s middle-school-aged kids, but they’re not middle school students, per se, because it’s not school but rather an informal learning context. So, I think of it as sort of early adolescence. And it’ll be really interesting because there’s much more freedom of choice in a museum setting.
I think it’s interesting, and it connects to your AI question. Interestingly enough, I’ve been asked to give many talks about AI at very different kinds of venues than I typically do. Gosh, out of all the work that we do, I can’t offhand think of any active projects that don’t have AI at their core. But just to sort of step back and answer your question, I’ll give you a quick example. I recently gave a talk at the I/ITSEC Conference, which many people in our field aren’t familiar with, but it’s the largest simulation-based training conference in the world. I was giving a talk on a panel entitled AI Run Amok (their name, not mine).
I think there’s a lot of concern in the public discourse on AI. Actually, my sense is that we really are on the cusp of a new age. We’re seeing such dramatic increases. So, this is objectively a different time than it’s been in all of the previous eras of AI. I actually do think it’s quite different. And I must also say that I am seriously concerned about the wider workforce implications of AI going forward.
There’s a fairly wide range of projections of what percentage of jobs are automatable. The McKinsey Global Institute released a report in 2017 entitled “A Future That Works: Automation, Employment, and Productivity” that I thought was pretty conservative; it suggested that a surprisingly large number of jobs have numerous tasks that are automatable. And those aren’t just blue-collar positions; there are a lot of white-collar positions in there. There’s even this term “labor market polarization,” which economist and MIT professor David Autor talks about. So, we’re seeing enormous opportunities for very highly skilled individuals who have technical skills and deep analytical expertise.
The future is very bright, and interestingly enough, the opportunity for growth also exists for people at the other end of the spectrum, which typically includes a lot of the kinds of jobs requiring manual dexterity or caregiving, where there’s that kind of human element to it. Of all the projections, the one I’m most … comfortable is probably not the right word … but most confident stating is that 15 years from now we’re going to be living in a fundamentally different work environment.
And that’s both the opportunity and the problem. But at least for my career to date, I’ve entirely thought of AI as a tool. Not as a tool for replacing jobs, but as a tool for helping people learn or be trained.
But for me, the real core capability of AI, and the reason I got into this line of work in the first place, is just this tremendous promise that AI holds for being able to support learning. We can think of that as supporting learning of fundamental STEM knowledge, skills, and abilities. But we can also think about it as learning about self-regulation and metacognition. We have several projects with our colleague, Roger Azevedo, at the University of Central Florida looking at self-regulated learning and how AI technologies can support that. So, I am in this kind of funny position of both thinking about AI as something that’s going to be incredibly disruptive, while at the same time thinking about AI as a crucial tool that is part of the solution to how we educate our children and ourselves in the future.
You said in 15 years we’re going to see a fundamentally different world in regards to how people work. There now seems to be a growing expectation that throughout your life, you’re going to have to go through further trainings and educational programs, and I think AI will play a significant role in this regard. Can you speak to where you see this idea of lifelong learning going?
Absolutely. No, I don’t think it’s been overstated at all. I think it’s been significantly understated. And I’m kind of serious about that. So, thinking about this from a sort of labor market perspective, we need to, as a society, do a couple of things. We need to make sure that the types of jobs available in our future industries are fillable. So, we have to train people to have the expertise and the capabilities to fill them. But from the people perspective, we also want to make sure that we have employable people. And these are both significant challenges. One metaphor that I have thought about considerably is what it means to have a personalized learning companion that follows you from when you’re really quite young all the way through your formative years until you’re a college student, graduate student, and then out into the workforce. And to the extent that everything is going to be changing so very, very rapidly, it’s just impossible to envision a future where we’re not all constantly learning all the time. It’s just not conceivable.
And so that brings to the forefront this great, great focus on self-regulated learning and metacognition. It is remarkably important that we, as learners, and our children as learners, are not just masters of a particular kind of problem-solving competency, but that we are really master learners. And what better technology to support this goal than AI? So, as I’m sure you’re aware, there has been considerable exploration, really in the past three to four years, of “teaming.” There is much, much more on the industry side than we see in academia. For example, developing teams of humans and machines that can collectively perform some task. It might be performing surgery, or pick your favorite complicated task that requires very complex abilities. You can imagine which aspects of these tasks would best be handled on the computational side and which would best be handled on the human side.
And of course, this will change over time. But to me, it’s a very interesting metaphor and one that is grounded in what we’re going to be experiencing more and more in the future. And I think one of the key meta-skills for the next 10 years is this skill of being able to work effectively with computational partners. And what does that mean? Well, I don’t think that we’re simply born to do this. It takes some real, serious education and training. Not just in the teaming itself, which to me is fascinating, but in cultivating people’s ability to be good teammates.
What, in your estimation, is the biggest challenge in terms of K-12 education?
To me, there are a couple of issues that are most important. So one (and I’ve only in the past few years in collaboration with our colleagues at Indiana University begun to deeply appreciate this) is the question of teacher training. What’s the best way to prepare our teachers? And I certainly think that there’s an opportunity for at least some kind of technology amplification of those skills. Of course it’s not just the subject matter that’s most important but also the ability to very effectively orchestrate classroom activities. How do you train people to do that? So, in some ways, this is a kind of “micro-concern.” I mean, it’s a nationwide challenge, but it’s still micro.
The macro one of course–the elephant in the room–is how do you address this enormous systemic challenge? And I again think about this from the perspective of AI. Some countries, China being a great example, are really sort of moving headfirst into AI. And there are an enormous number of venture-backed education startups in China that, from the 50,000 foot view, are really ingeniously thinking about how to use personalized learning to the advantage of that country as a whole. And they insightfully recognize that there are certain things that are supportable with these new interventions.
So, for example, being able to have “intelligent formative assessments,” which could just naturally be part of the learning experience, and being able to develop these for teachers, while also recognizing that this really speaks to the needs of the student. In some ways, it’s kind of like the Wild West of education. My concern for the US is that we’re simply too conservative in this area. I’m worried that we’re not sort of willing, as a country, to be as experimental, particularly with the tech, as we ought to be. As a computer scientist, I am deeply committed to the idea that everyone needs to be able to think computationally. And that does not mean everyone needs to be a computer scientist; it means being readily able to bring computational skills to bear on whatever your task at hand might be. And that might be as a microbiologist, it might be as a physician, it could be as an attorney. There are all kinds of jobs that deeply need this.
As we move into this age of the intelligent machine, what Kevin Kelly calls “cognification”, there’s just an incredible demand for all of our students to be able to do this. So, my colleagues Eric Wiebe and Brad Mott at NC State and Kristy Boyer at the University of Florida and I are looking at integrating computational thinking into the middle grades science experience. It’s a really fun project. We’re creating a game-based learning environment and having both in-game and out-of-game activities. It’s all in the classroom, around computational thinking in the life sciences for seventh and eighth grade kids.
There are all kinds of interesting challenges there, but the one that is kind of at the forefront and that really has to be solved soon is determining how we get room in the curriculum … this is just a very practical question. How do we make room in the curriculum for computer science at the high school level and for computational thinking at the middle school and elementary levels? And for now, I think it’s either integrating into science or math. But figuring out how to do this in an artful way and being able to really scale this kind of learning experience is just a critical, critical problem. But if we can do it, it’s going to make STEM Ed such an exciting place to be. I think it’ll be really great.
What’s next for you in the next 15 years?
Well, first, I am very interested in what it means to take these technologies and make them work in real classrooms at scale. So, one of the major nutrients of today’s AI technologies is data, as you know. And when you really start operating at scale, you get a couple of things going that I think are really exciting. First, you’ve got this enormous data exhaust being generated, which you can then take and use to train incredibly, incredibly accurate and proficient models. So, I imagine training programs (even my own area, tutorial planners, pedagogical planners) that can make the most effective decisions about how to best support an individual learner while considering a wide array of characteristics about that learner.
But then, think about what it means to deploy these things at scale. And when I say “scale,” I don’t so much mean at the district level but at the national and international levels. And that poses all kinds of really serious practical challenges. But I think that’s what we’re really after. What we want to do is take everything that we know, not just about adaptive learning technologies and AI but also learning analytics, and deeply consider what it means to do that at a truly broad scale. That’s really attractive because it has this incredible positive feedback loop to it: the more data there is, the more informed the algorithms can be, the better they support individual learners, and the broader the audience grows. It is a virtuous cycle for sure.