CIRCL perspectives offer a window into the different worlds of various stakeholders in the cyberlearning community — what drives their work, what they need to be successful, and what they think the community should be doing. Share your perspective.
Mike Sharples is Professor of Educational Technology in the Institute of Educational Technology at The Open University. He studies human-centered design of new technologies and environments for learning.
How did you get into the field of cyberlearning?
As an undergraduate studying computer science, for my final-year project I worked with a group of students designing a programming language for kids. I got very interested in how children learn through programming and how you could use computational methods to understand language and problem solving. I also wrote a program to try to teach French. I was fascinated by the differences between computer languages and human languages, and by how you could use software not only as a teacher, but to help people learn.
For graduate school, I joined the AI department in Edinburgh just as it was becoming deeply engaged with Logo, while Seymour Papert was doing his work in the U.S. In the U.K., they took a different approach, using the software to provide guided support and advice for children aged 11-15 as they learned Logo. My PhD work was on how children develop creative writing and on ways to help them explore language. My first job was with The Open University, working on a piece of software called Cyclops, which was somewhat like Skype, but in the early 1980s, when it was pretty cutting-edge stuff. It let people at a distance communicate through writing and shared graphics.
So I got involved in e-learning through the Open University and the AI Department at Edinburgh. But underpinning much of this was not the idea of the computer as a teacher, but designing an environment where people could learn together.
What is your main project now?
I have two main projects. One, called nQuire, is around inquiry science learning. It brings together citizen science and inquiry learning: citizens are not just doing science on behalf of scientists, but initiating their own science projects, managing them, and recruiting teams. It's a Kickstarter-like approach, but instead of money you get support and expertise. We have a website and environment called nQuire-It as a testbed for this type of community-led, crowd-sourced inquiry.
The other project is around a MOOC platform called FutureLearn. I'm the Academic Lead, providing advice and guidance on designing a platform based on good pedagogy. What we are particularly interested in is what kinds of teaching and learning actually get better with scale. Some kinds get worse with scale, like sports coaching. Some, like lecturing, are pretty much the same whether it's 200 or 2,000 or 20,000 people. But what kinds of teaching and learning can improve at massive scale?

We based FutureLearn on a pedagogy that came from Gordon Pask and Diana Laurillard. Their work hasn't been particularly influential in the U.S., but it has been in the U.K., and to some extent in Europe. It's a different approach, based on human cybernetics and the idea that all learning is conversation: we converse with other people to reach understanding, and with ourselves to explore the world and make distinctions. If you learn by conversing, then the more people who are engaged, the more diverse the views and the richer the learning experience. We built the platform around that pedagogy in the hope that the more people who came, the better it would get. The biggest course we've had drew more than 270,000 people. Participants watched the lectures and read the text, but about a third of them also engaged in discussions, and many more read the comments. So it really was a social learning experience. Then we brought in social networking techniques to manage that massive-scale experience.
What kinds of things can you teach with this kind of discussion-based pedagogy?
It's particularly good for subjects where there are experiences to share, differences of opinion, and issues you want to discuss. The 270,000-person course was put on by the British Council for language learners preparing to take the IELTS test. People brought their own experience of learning English, experiences to share, and also worries about taking the test. In contrast, we had a course from Edinburgh University about the Higgs boson. The approach was less suited to that kind of course, where you have to learn facts and concepts.
Can you say more about how FutureLearn compares to other MOOC platforms?
The main difference is around the social learning and the pedagogy. The other main MOOC platforms were based on an instructivist pedagogy and personalized learning, the idea that each learner has a personalized path. We were the first to develop a platform around social learning. There is a conversation associated with every piece of teaching material. You don't go off to a separate forum: associated with every piece of content––every article, video, or piece of audio––is a space that lets learners have a conversation in context. Also, when learners review others' assignments, that becomes a conversation. The conversations are water-cooler-style discourse, so it's very simple to view and participate, following Jean Lave and Etienne Wenger's idea of legitimate peripheral participation. When we built the platform we weren't sure people would have a conversation around every piece of content, but it's been just the opposite: the conversations have gotten too big. In the 270,000-person course, just one video had 56,000 comments! The problem then is how to manage that massive scale of information. We let learners know that they don't have to read every comment, and we use social network techniques like most-liked comments and the ability to follow other learners and educators and read their comments. So some of the most interesting and useful commentary rises to the top.
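FutureLearn's actual data model isn't described here, but as a minimal sketch of the design idea Sharples outlines, a conversation attached to every content step, with followed authors and most-liked comments surfaced first, it might look something like the Python below. All names are hypothetical, not FutureLearn's implementation:

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: each piece of content carries its own
# conversation, instead of sending learners off to a separate forum.

@dataclass
class Comment:
    author: str
    text: str
    likes: int = 0

@dataclass
class ContentStep:
    title: str                        # an article, video, or audio step
    comments: list = field(default_factory=list)

def top_comments(step, followed, limit=5):
    """Surface useful commentary: comments by people this learner
    follows first, then the most-liked comments overall."""
    return sorted(
        step.comments,
        key=lambda c: (c.author in followed, c.likes),
        reverse=True,
    )[:limit]

# Usage: a video step with its in-context conversation.
video = ContentStep("Week 1: Share your experience of learning English")
video.comments += [
    Comment("amina", "I learned English from radio dramas.", likes=42),
    Comment("jorge", "Nervous about the speaking test!", likes=7),
]
print([c.author for c in top_comments(video, followed={"jorge"})])
# -> ['jorge', 'amina']: followed learners first, then by likes
```

The key design choice is that the conversation is a property of the content step itself, so discussion always happens in context rather than in a detached forum.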
We also require that people use real names, not pseudonyms, and encourage them to create profiles. And we changed the structure of courses: it used to be that the first few steps were an introduction to the course, to the educators, to how to use the platform, and to the coming weeks' material. By the time you got through all that, you'd lost half the learners; you could see them fall off the cliff. The more successful courses start with learners bringing in and sharing their own experiences, such as their experience of learning English, and getting other learners to comment on those experiences, before going on to the core teaching material. So you apprentice the students onto the platform and show them it's a social platform, that their views are valued, and that it's easy to contribute.
As a result, a lot more people engage socially. On other MOOC platforms, around 10-12% of students post comments; on FutureLearn, it's around 35%. Completion rates are higher as well, about twice those of the other platforms. One participant described other MOOCs as being like two dimensions; when you add the social component, it becomes like three dimensions. It kind of makes sense: social network environments have been really successful, so why not apply those techniques to learning at massive scale?
What’s next for FutureLearn?
We now have over 2 million people registered on FutureLearn, so the focus of the software team has been producing a system that is robust and works at massive scale. Over the past year or so we've tended to do incremental development rather than major changes, but we're just about to change tack and make some more major innovations on the platform. We have lots of great ideas for improving it: for example, giving educators dashboards so they can bump up valuable contributions to make them more visible, and using a reputation management system so that learners who have made valuable contributions in the past gain status and their comments become more visible. We have another platform at the OU that uses reputation in this way: iSpot, a community platform for nature observation. We're trying to bring those ideas to FutureLearn.
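A reputation system like the one described might, as a hedged sketch, combine a comment's own likes with its author's accumulated standing. Nothing below reflects FutureLearn's or iSpot's actual mechanism; the weighting and damping are invented assumptions:

```python
import math

# Invented sketch of reputation-weighted visibility; the 0.5 weight and
# the log damping are assumptions, not FutureLearn's or iSpot's formula.
reputation = {"amina": 120, "jorge": 5}   # standing from past contributions

def visibility_score(likes, author):
    # A comment's own likes, nudged by the author's track record.
    return likes + 0.5 * math.log1p(reputation.get(author, 0))

comments = [("amina", 2), ("jorge", 3), ("newcomer", 3)]
ranked = sorted(comments, key=lambda c: visibility_score(c[1], c[0]),
                reverse=True)
print(ranked)   # amina's history lifts her comment above equally-liked ones
```

The open design question is how strongly past standing should count: too much, and newcomers are drowned out, which would cut against the legitimate peripheral participation the platform is built around.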
What research are you doing with the platform?
We're testing out some approaches using massive-scale A/B testing, the sort of thing that learning scientists and educational technologists have wanted to do for years. Many of the tests we've done so far have been user-experience tests and looks at different ways to promote the courses, but going forward we can test out learning designs to see which approaches work. For example, we can tell how many learners look at a step, and also how many mark it as complete from their own perspective. So we can get rich analytics on such things, and use them to optimize the learning design even in the first week of a course.

We have a number of key performance indicators (KPIs): how many people start a course, mark at least one step complete, finish the first week, participate socially, and complete the course, where completing means marking at least half the steps as complete and doing the assessments. We use an R program that generates a report for the course developers to track these indicators over the duration of a course. The report helps us identify weak points: where a high percentage of people get the wrong answer on a quiz question, for example, or where people drop out. One thing we looked at recently was video length. We plotted all of the video lengths, which range from 10 seconds up to 30 minutes, against the percentage of people who not only quit the video but also quit the entire platform. The relationship was pretty much linear to start with, and then after 6 minutes the percentage of people quitting the platform shot up.
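The real report is produced in R against FutureLearn's own data. Purely as an illustration of how funnel KPIs like these can be computed from step-completion events, here is a toy Python sketch; the event format is invented, and the completion rule below omits the assessment requirement:

```python
# Toy illustration of funnel KPIs; the real FutureLearn report is an R
# program over its own data. The event format here is invented, and
# "completed the course" omits the assessment requirement for brevity.
events = [
    ("lee", "started"), ("lee", "completed:1.1"), ("lee", "commented"),
    ("mia", "started"), ("mia", "completed:1.1"), ("mia", "completed:1.2"),
    ("sam", "started"),
]
TOTAL_STEPS = 2   # steps in this toy course

def kpis(events):
    started = {who for who, what in events if what == "started"}
    social = {who for who, what in events if what == "commented"}
    steps_done = {}
    for who, what in events:
        if what.startswith("completed:"):
            steps_done.setdefault(who, set()).add(what)
    finished = {who for who, done in steps_done.items()
                if len(done) >= TOTAL_STEPS / 2}
    return {"started": len(started),
            "marked a step complete": len(steps_done),
            "participated socially": len(social),
            "completed course": len(finished)}

print(kpis(events))
# -> {'started': 3, 'marked a step complete': 2,
#     'participated socially': 1, 'completed course': 2}
```

Tracked week by week, indicators like these are what let a team spot the drop-off points the interview describes, such as a quiz most people get wrong or a video that is too long.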
Some of these things are described in a paper by Rebecca Ferguson and myself on massive-scale learning. I have a PhD student who is looking at self-directed learning on FutureLearn courses and how experienced learners manage their learning. There are about 20-30 papers now on FutureLearn courses. Katy Jordan, a PhD student at the OU, has a superb website with a comparison across MOOC platforms and a literature base of hundreds of papers. It's a great starting point for research on MOOCs.
I spent 30 years designing online learning platforms, and at best you might get a couple of classes of 30 kids to test out your software. Or when we worked with museums, we were really pleased to get 3,000 participants over a year. Now we have 2 million. Doing it at scale is just an amazing opportunity. And building a large-scale platform based on a theory of teaching and learning: that's really exciting.
Anything you’d like to add about your work with nQuire?
My focus has always been learning outside the classroom, seamlessly across contexts, for learners who are mobile. With citizen inquiry, the question is: how can you get groups of people to initiate interesting inquiry-led projects, and orchestrate those online with the support of experts? We've had mixed success. When it's well facilitated, it works. For example, I had a PhD student working with both expert meteorologists and amateur weather enthusiasts on our nQuire-It site around inquiry learning for weather. Participants engaged in activities like cloud spotting, identifying different types of clouds and formations, including unusual ones, and exploring the relationship between air pressure and rainfall using an Android phone app. But when the PhD student stopped facilitating, activity dropped away. You need a critical mass of people. Our iSpot platform is still going, with 40,000-50,000 people registered, but nQuire-It has only a few hundred. Not yet a critical mass.
I'm keen to better understand how to support the process of citizen engagement in science, and how to move up to a massive-scale site that will then be self-sustaining. More mentors and facilitators, more funding, additional social functionality like reputation management and rewards: all of these might help. And there are interesting issues around apprenticing people into a community. We've been looking at communities like Slashdot and StackExchange, for example, but their members are deeply immersed in their fields. We had hoped to find citizens similarly immersed in an issue. So it's not enough to have a really good pedagogy; you need ways to grow a sustainable community.
What would you like to see the cyberlearning community doing more of?
The main things are pedagogy-informed design and innovation, and scale and sustainability. At least in Europe, there has been a recent emphasis on scale and sustainability. There have been roughly three phases of technology-enhanced learning. The first phase was: can we do it? That was the early work on Logo and intelligent tutoring systems. Can we get these systems to work? Can we model Socratic dialogs on a computer system? The second phase was: can we get them to work with real learners? That was during the 1980s and 1990s, with virtual learning environments, for example. Now it's: can we make the pedagogy more relevant and innovative, and make these systems large-scale and sustainable? Virtual learning environments may work, but they're based on a particular kind of instructional design approach, and typically don't bring in inquiry learning, case-based learning, embodied learning, game-based learning, and so on.
Trying more innovative pedagogies, and reaching scale and sustainability: I think that's the real challenge at the moment. We know we can deliver text and video to millions of people. How do we enable good learning conversations at scale? Learning as conversation is learning through social interaction. It seems like there is a huge opportunity to bring what we know about social networks, and the techniques for managing those conversations, into learning platforms.