CIRCL perspectives offer a window into the different worlds of various stakeholders in the cyberlearning community — what drives their work, what they need to be successful, and what they think the community should be doing. Share your perspective.
Stephanie Teasley is a research professor in the School of Information at the University of Michigan.
How does working in a School of Information relate to the Learning Sciences?
Although I am trained as a cognitive and developmental psychologist, I’ve worked most of my career in a School of Information. I love the inherently interdisciplinary nature of a School of Information — and we’re very into using multiple methods. Schools of Information have immense expertise in data science and big data, in information visualization, in designing user interfaces, in information architecture and retrieval, in data policy and security, and in curating resources — but as Learning Scientists we bring in data that is not typically the focus in Schools of Information. We need to work together to interpret big data in terms of learning processes and to recommend tools and resources that could improve students’ lives.
What is the key to productive interdisciplinary work?
Complementary expertise and an ability to see past doctrines. There are so many stovepipes in academia. The early days of learning sciences were exciting because we didn’t have those borders, but as we’ve matured, we are starting to draw boundaries around how we define learning sciences research, around methodological approaches, and around contexts for learning… The majority of work is in K-12 STEM, for example. I am excited about opening up learning sciences work in higher education, connecting out-of-school and in-school learning, and doing more research in informal contexts. I want to see a continued openness in our community, because that’s where we will see deep innovations.
Why is Learning Analytics important to the future of research on Learning?
When I first began studying processes of learning, we had to hand-code students’ discussions — that was very slow, laborious work, and we lost a lot of context in the process. Then we started videotaping students and simultaneously capturing logs from the software simulations they were using. Now we could integrate discourse with the context of the discourse, and there was real joy and beauty in the process of analyzing learning — but it was still laborious, so we could only work with 10 or 20 students’ data at a time.
When I realized how much data was being captured by our Learning Management System at Michigan, I became excited because we could be studying tens of thousands of students, not just 10 or 20. But to do that, we have to realize that much of the data that is automatically collected isn’t that interesting — for example, the mechanics of turning in assignments. But there is enough in online discussions, wikis, and use of other rich learning tools that could help us better understand learning — and now we can look for patterns that might be important not just for 10 or 20 students, but potentially for 10,000 students — and not just for an hour or two, but over the course of years while they attend a university. So if we find meaningful indicators of learning processes and see how these patterns vary across students, we can potentially design learning experiences that benefit huge numbers of students.
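To make that workflow concrete, here is a minimal sketch, not from the interview itself, of how raw LMS event logs might be filtered down to the richer activity Teasley describes and aggregated into per-student engagement indicators. The file name and column names (student_id, event_type, timestamp) are illustrative assumptions, not any real system’s schema.

```python
# A hypothetical pipeline: raw LMS event logs -> weekly per-student
# engagement indicators. Column and event names are assumptions.
import pandas as pd

# Each row is one logged action: who did what, and when.
events = pd.read_csv("lms_events.csv", parse_dates=["timestamp"])

# Drop low-signal mechanics (e.g., assignment-submission clicks) and keep
# richer activity such as discussion posts, wiki edits, and resource views.
rich = events[events["event_type"].isin(
    ["discussion_post", "wiki_edit", "resource_view"]
)]

# Aggregate into weekly activity counts per student, an indicator that
# can be tracked across semesters or even years of enrollment.
indicators = (
    rich.set_index("timestamp")
        .groupby("student_id")
        .resample("W")["event_type"]
        .count()
        .rename("weekly_activity")
        .reset_index()
)
print(indicators.head())
```

Weekly counts are just one candidate indicator; the same aggregation step could instead track discussion depth, wiki revisions, or any other signal a learning-theory lens suggests is meaningful.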
If I’m a University President, a voter, or a congressperson, why should I care about Learning Analytics?
While issues of data privacy are very much on people’s minds today, it’s important to recognize that using big data doesn’t mean becoming Big Brother; it can become a means for understanding how to tailor learning experiences to meet the needs of every student. In my own work, I love having access to data from a whole campus: we can look at WHO is benefiting, which means we can look at equity. For example, are first-generation college students having a different experience from those whose parents went to college? We’re combining what we know from learning theory with indicators in the data and other contextual information we know about students at our university — and finding ways to expand opportunities to learn. For example, we might know that students who log in to the campus LMS four times per week get a better grade than those who only log in twice. But simply telling students to log in more often won’t solve the problem — rather, should we be organizing the learning environment to support spaced practice? Or do the course activities need to be designed to attract more frequent engagement by diverse students? Learning analytics allow us to ask and answer questions like these, and to generate actionable insights into learning.
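As an illustration of the kind of equity analysis described here, the hedged sketch below disaggregates a login-frequency/grade relationship by first-generation status. Every file and column name is an assumption made for the example, not Michigan’s actual data model.

```python
# A hypothetical equity check: does weekly login frequency relate to course
# grade, and does the pattern differ for first-generation students?
import pandas as pd

records = pd.read_csv("student_term_records.csv")
# Assumed columns: student_id, weekly_logins (mean logins per week),
# course_grade (0.0-4.0 scale), first_generation (True/False)

# Band login frequency so the comparison echoes the "twice vs. four times
# per week" example from the interview.
login_band = pd.cut(
    records["weekly_logins"],
    bins=[0, 2, 4, 14],
    labels=["<=2 per week", "3-4 per week", "5+ per week"],
)

# Disaggregate mean grade (and group size) by first-generation status.
summary = (
    records.groupby(["first_generation", login_band])["course_grade"]
           .agg(["mean", "count"])
)
print(summary)
```

A table like this only shows where gaps appear; deciding whether to redesign for spaced practice or for richer course activities still requires the learning-theory lens Teasley describes.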