Artificial Intelligence (AI), machine learning, and related technologies will have powerful impacts on the future of learning. We do not yet know all of the uses and applications of AI that will emerge; innovations are appearing regularly, and the most consequential applications of AI to education have not yet been invented. Amidst this rapid expansion, we know there are both potential benefits and considerable risks. Although the greatest impacts are likely 5-10 years out, educational planning needs a long horizon to be effective.
To explain why these issues are now urgent, we begin with a metaphor: technology is an amplifier (cite). Applying AI in education is not new; the intertwined research and development of AI and education goes back 50 years. For example, just as Marvin Minsky was exploring AI and the nature of mind in the late 1960s and early 1970s (cite), his colleague Seymour Papert was inventing the educational programming language Logo (cite), a language based on the AI language LISP. Although this history is long, the amplifier for AI has been set low: most uses of AI in learning have been confined to research projects, and large-scale use of AI in mainstream education has been limited. This is about to change.
In the wider world of technology, AI has become a core part of cell phones and home assistants, allowing us to talk to these devices and use them as personal assistants. Machine learning, neural networks, and deep learning algorithms are increasingly prevalent in products that support image processing and speech recognition (Richter et al., 2019). In education, we see multiple converging factors that will push AI in educational technology to high-amplification impacts within the next 5-10 years. Due in part to the recent pandemic, learning is rapidly shifting online. Big data to feed AI with information about learning is becoming more readily available. As industry creates and refines interfaces that support more naturalistic interactions between AI and learners, these tools become more appealing to educators and learners. Costs are dropping, the pace of innovation is accelerating, and the infrastructure to scale AI applications widely is readily available.
What do we mean by amplification? We mean that learning technology can take an aspect of a learning process and emphasize it, refine it, intensify it, and scale it widely. This can be good or bad; undesirable and desirable effects on learning can scale with equal ease. Unexpected consequences can occur. Efficiencies around less important learning practices can drive out less-efficient but more meaningful and relevant alternatives. There is a tendency to invest in what is possible and to underinvest in the analysis of inequitable impacts and the mitigation of risks. We need more knowledge, policies, and practices ready to mitigate the bad and redouble the focus on what is good.
To spur conversation, we convened a facilitated, online meeting of expert researchers whose investigations focus on AI and the Future of Learning. We sought to address two questions over two days and seven hours of conversation:
- What will educational leaders need to know about AI in support of student learning in order to have a stronger voice in the future of learning, to plan for the future, and to make informed decisions?
- What do researchers need to tackle beyond the ordinary to generate the knowledge and information necessary for shaping AI in learning for the good?

In this report, we discuss how experts see the strengths and weaknesses of AI, as well as the opportunities and barriers. We share several scenarios for applying AI to learning that differ from the most common applications and may portend new applications of the future. And we discuss the experts' recommendations about which research topics need more emphasis going forward.
Context, Participants & Process
The expert panel reported here was part of our work as the Center for Innovative Research in Cyberlearning (CIRCL). Although CIRCL ended on September 30, 2020, related work continues in the newly funded Center for Integrative Research in Computing and Learning Sciences (CIRCLS). CIRCL hosted the convening in coordination with colleagues at Digital Promise who were working to support the policy needs of the U.S. Department of Education, specifically around issues of artificial intelligence.
CIRCL is an NSF-funded project that serves as a community center for a cluster of independent NSF-funded projects in the Cyberlearning program. Each of the more than 400 projects looks 5-10 years into the future and applies concepts from computer science and the learning sciences to investigate future learning scenarios. Through CIRCL, we have seen an increasing number of projects that explore “ambitious mashups” (cite) of artificial intelligence capabilities with other resources, technologies, approaches, and capabilities. At a fall 2019 CIRCL convening of approximately 200 Cyberlearning researchers and investigators, attendees indicated that challenging issues around the ethics and equity of AI applications are a very important area for the field’s attention. We have seen colleagues in other countries organize around some of these issues (cite LACE), and issues of ethics and equity in AI and learning recur at the recent conferences that cyberlearning investigators attend. Further, NSF recently awarded a first-of-its-kind $20 million center on issues of artificial intelligence in education (cite press release), with a sense that this investment is not the end but rather the beginning of a much more significant emphasis on these issues at the Foundation. Overall, we are experiencing surging awareness that responsible researchers need and want to start doing more to tackle issues relating to AI and education, starting immediately.
The full and final report will be available after 11/1.