CIRCL perspectives offer a window into the different worlds of various stakeholders in the cyberlearning community — what drives their work, what they need to be successful, and what they think the community should be doing.
Alyssa Wise is Associate Professor of Learning Sciences and Educational Technology in the Steinhardt School of Culture, Education, and Human Development at New York University. Her research is situated at the intersection of the learning sciences and educational data science, focusing on the design of learning analytics systems that are theoretically grounded, computationally robust, and pedagogically useful for informing teaching and learning.
How did you get started in cyberlearning?
I was a chemistry undergraduate and wanted to expand my horizons after college. I had always loved learning about and problem-solving in science, but found the long hours alone in the lab isolating. When the opportunity arose to teach high school science in South America, I jumped at the chance. I ended up with an assignment teaching high school IB physics. Those were still relatively early days for computers and the internet in the classroom, and not a lot of people were comfortable using technology to support their teaching, but I was — so I was often asked to share my experiences and ideas. While I was operating on what turned out to be pretty good instincts, I knew there was a more disciplined way to go about using technology to support teaching and learning. So my interest in cyberlearning originally came from a K-12 classroom perspective, recognizing that there were opportunities to improve how technology was being used.
I pursued my doctorate at Indiana University Bloomington with Thomas Duffy, and then interned at SRI’s Center for Technology in Learning, working on the NanoSense project (with CIRCL Co-PI Patti Schank). In NanoSense, my two interests — a love of science and a love of technology — repeatedly wove themselves around each other. After receiving my PhD in Learning Sciences, I took on a faculty position at Simon Fraser University, where I built up a research program in learning analytics and computer-supported collaborative learning. While this work does not focus on science education per se, I draw on my previous lab-based education in thinking through how learning analytics systems need to be instrumented and designed to collect and analyze multiple data streams in useful ways. If you think of cyberlearning as the fusion of cyber (technology) and learning, learning analytics is similarly the fusion of data science and learning.
How did you get started in learning analytics?
I went to a workshop at the learning analytics conference somewhat serendipitously, through an invitation from Dan Suthers, who I met when I was at SRI. I presented some of the work I was doing to conceptualize and track learners’ “listening” patterns in online discussions (the ways in which they attended to the existing textual comments of others), and quickly realized that I was basically already doing learning analytics research even though I hadn’t yet used this label for it.
The work I was doing at the time was on the E-Listening research project (funded by the Social Sciences and Humanities Research Council of Canada). Previous work on collaboration in online discussions had primarily focused on the comments people contributed to the conversation, while ignoring the precursor of how people were accessing and interacting with the comments that were already there. This listening turns out to be incredibly important, both because a conversation in which everyone talks and nobody listens isn’t really a conversation at all, and because listening activity accounts for the vast majority of the time people spend in the discussions. So we started figuring out how the traces in the log-file data could help us understand online listening as a central element of computer-supported collaborative learning (CSCL).
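To give a sense of what this can look like in practice, here is a minimal sketch in Python (with hypothetical event names and a made-up log layout, not the project’s actual data) of deriving simple listening metrics from discussion log files.

```python
# A minimal sketch (hypothetical events and columns, not the project's actual
# log format) of deriving simple "listening" metrics from discussion log data.
import pandas as pd

log = pd.DataFrame([
    {"user": "s1", "action": "view_post", "post_id": 101},
    {"user": "s1", "action": "view_post", "post_id": 101},
    {"user": "s1", "action": "view_post", "post_id": 102},
    {"user": "s1", "action": "write_post", "post_id": 103},
    {"user": "s2", "action": "view_post", "post_id": 101},
    {"user": "s2", "action": "write_post", "post_id": 104},
])

total_posts = log["post_id"].nunique()

def listening_metrics(events: pd.DataFrame) -> pd.Series:
    views = events[events["action"] == "view_post"]
    writes = events[events["action"] == "write_post"]
    posts_read = views["post_id"].nunique()
    return pd.Series({
        # Breadth: share of all posts this learner opened at least once.
        "breadth_of_listening": posts_read / total_posts,
        # Revisiting: average number of times each opened post was viewed.
        "reviews_per_post": len(views) / max(posts_read, 1),
        # Balance of listening to speaking.
        "views_per_post_written": len(views) / max(len(writes), 1),
    })

metrics = log.groupby("user")[["action", "post_id"]].apply(listening_metrics)
print(metrics)
```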
Initially we were doing descriptive work that identified and portrayed patterns in listening behavior. Our primary approaches were computational in nature, using techniques such as cluster analysis and multi-level modeling. We also performed microanalytic case studies using clickstream data to recreate people’s pathways through the discussions. I think this balance of quantitative analyses that can address large quantities of data and carefully chosen qualitative analyses that look in detail at some subset of the data is incredibly important; we wouldn’t have found the patterns without the computational work, but we wouldn’t have understood what they meant without the case studies.
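As an illustration of the qualitative side, a sketch like the following (assuming a simplified clickstream layout, not the study’s actual data) could recreate one learner’s pathway through a discussion for a microanalytic case study.

```python
# A sketch of the microanalytic side (assumed clickstream layout, not the actual
# study data): recreating one learner's pathway through a discussion.
import pandas as pd

clicks = pd.DataFrame([
    {"user": "s1", "time": "2013-03-04 10:02", "action": "open_thread", "target": "T1"},
    {"user": "s1", "time": "2013-03-04 10:03", "action": "view_post",   "target": "101"},
    {"user": "s1", "time": "2013-03-04 10:09", "action": "view_post",   "target": "102"},
    {"user": "s1", "time": "2013-03-04 10:15", "action": "write_post",  "target": "103"},
])
clicks["time"] = pd.to_datetime(clicks["time"])

def pathway(user: str) -> pd.DataFrame:
    """Ordered trace of one learner's moves, with time spent before the next move."""
    p = clicks[clicks["user"] == user].sort_values("time").copy()
    p["minutes_until_next_action"] = (p["time"].shift(-1) - p["time"]).dt.total_seconds() / 60
    return p[["time", "action", "target", "minutes_until_next_action"]]

# Read alongside the actual post texts to build the qualitative case study.
print(pathway("s1"))
```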
At the workshop, a researcher active in the field of learning analytics suggested I take things a step further by showing our analyses back to students so they could better understand and direct how they listened online. That suggestion really affected the project by giving it an applied focus: after collecting and processing the listening data, how could we usefully show it back to students? This is more complicated than it sounds, since analytic metrics for students need to be clear and easily interpreted (and thus simpler than many of the metrics we used in our research). We did a “Wizard of Oz” study in which we collected listening data on a weekly basis, quickly processed it, and presented the traces back to students right away to help inform their learning activities while they were still in process. This is known as “closing the loop” in the learning analytics cycle. Our work on the E-Listening Project is described in the first five papers listed below.
What do you want people to know about learning analytics?
The most important thing for people to know about learning analytics is that it doesn’t magically produce answers. There is so much data available nowadays (not all of it particularly useful by the way) that I think people sometimes have a false sense of learning analytics as a set of fully automated techniques that turn raw data into meaningful information. In reality there are three critical things that are often overlooked:
- The importance of human decision making. There are a large number of choices that need to be made in cleaning and analyzing data that have critical influences on the results you get (e.g., what data you include and exclude, how you deal with missing data points, how you craft features from the data, what algorithms you use and what parameters you set for them). For example, while high-level reports may simply say that a cluster analysis was performed, there is more than one algorithm that can be used to cluster and different criteria that can be applied for aggregation. Equally important, the computation can tell you the optimal solution (groupings of cases) for a given number of clusters, but a human must decide how many clusters most usefully represent the data (with the help of various indices). These choices can have profound impacts on the findings and interpretations that result (the first sketch after this list illustrates this).
- The importance of conceptual framing. Learning analytics that are meaningful go beyond simple reports of “this input predicts that output” to help explain how and why relationships occur, how new findings expand and build on what we know about how people learn, and how we might take action to improve things. It’s one thing to say that the data show a pattern, and quite another to have some explanation or plausible hypothesis as to why things are happening and what to do about them. A critical element in learning analytics is using the insight provided by the data to take action on a system in some way, and this often requires more than just prediction. That’s where theory becomes really important in helping you ask valuable questions of the data. For example, you might see that students who use a learning management system (LMS) less are likely to do worse on a final exam, which isn’t particularly surprising. But theory lets you ask: Are there certain kinds of engagement with the LMS (e.g., viewing resources or answering practice questions) or certain patterns of use (e.g., use that is concentrated versus distributed over time) that are more beneficial than others? Equally important, theory can offer hypotheses for why students aren’t using (certain features of) the LMS that can be tested and, if correct, used to improve the system (the second sketch after this list shows what such theory-informed features might look like).
- The importance of understanding the data in depth. A final thing to be aware of is that part of the way we come to understand patterns in the data and what to do about them is by understanding in depth how the data was generated and what activity it is taken to represent. For me, that means that in many of the studies in which I’m taking a computational approach, I’m also going deep into the data manually. Obviously there’s so much data that you can only do this with a portion of it, but especially if you have data that is rich, digging in can really help you understand what’s going on. For example, in the E-Listening project, when we went beyond the aggregate measures used to find the clusters to examine in detail what individuals who fell into each cluster did, it substantially changed our understanding of what actual kinds of activity the aggregated patterns represented and thus our characterization of the clusters. In the more recent work that I’m doing on MOOCs, we’ve found that beyond creating a predictive model based on linguistic features or a social network analysis diagram based on interactions, going back to the actual comments students contributed helps us understand what is going on and the relationship between the enacted learning phenomena and the data patterns that result. In short, computational methods and in-depth interpretation — humans and machines together — are powerful and complementary approaches to working with data. Our work on the MOOCeology Project is described in the last three papers listed below.
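To make the first point concrete, here is a small sketch (scikit-learn on synthetic data, not the study’s actual analysis) showing how the choice of algorithm and the number of clusters are decisions the analyst must make, with indices like the silhouette coefficient serving only as a guide.

```python
# A sketch (scikit-learn on synthetic data, not the study's actual code) of the
# analyst decisions hidden behind "a cluster analysis was performed": which
# algorithm, which linkage/parameters, and how many clusters to retain.
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# Stand-in for a table of per-learner metrics (e.g., breadth of posts read,
# revisits, views per post written); synthetic blobs used purely for illustration.
X, _ = make_blobs(n_samples=120, n_features=3, centers=3, random_state=0)
X = StandardScaler().fit_transform(X)

for k in range(2, 6):
    for name, model in [
        ("k-means", KMeans(n_clusters=k, n_init=10, random_state=0)),
        ("agglomerative, Ward linkage", AgglomerativeClustering(n_clusters=k, linkage="ward")),
    ]:
        labels = model.fit_predict(X)
        # Indices like the silhouette coefficient guide the choice, but the analyst
        # still decides which algorithm and which k yield interpretable groupings.
        print(f"{name:28s} k={k}  silhouette={silhouette_score(X, labels):.3f}")
```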
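As a companion to the second point, this sketch (hypothetical LMS event names and a tiny invented log) shows what theory-informed features might look like, as opposed to a raw count of clicks.

```python
# A sketch of theory-informed feature crafting: what kind of use, and how it is
# spread over time, rather than just a total click count. Event names are invented.
import pandas as pd

lms = pd.DataFrame([
    {"user": "s1", "day": "2017-01-09", "event": "view_resource"},
    {"user": "s1", "day": "2017-01-09", "event": "attempt_practice_question"},
    {"user": "s1", "day": "2017-01-16", "event": "attempt_practice_question"},
    {"user": "s2", "day": "2017-01-20", "event": "view_resource"},
    {"user": "s2", "day": "2017-01-20", "event": "view_resource"},
])

counts = lms.groupby(["user", "event"]).size().unstack(fill_value=0)
features = pd.DataFrame({
    "total_events": counts.sum(axis=1),                        # the naive "more is better" measure
    "resource_views": counts.get("view_resource", 0),          # what kind of engagement
    "practice_attempts": counts.get("attempt_practice_question", 0),
    "active_days": lms.groupby("user")["day"].nunique(),       # distributed vs. concentrated use
})
print(features)
# These features can then be related to outcomes (e.g., exam scores) and, more
# importantly, to theory-based hypotheses about why some patterns of use help more.
```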
How can learning analytics transform learning?
For analytics to truly have an impact that transforms teaching and learning, we need to pay more attention to the many social factors that surround data use. That includes the specific issues I’ve talked about earlier, such as recognizing the factors that go into analyzing, interpreting and acting on data, but also larger issues of data stewardship, algorithmic transparency, and ensuring ethical use. Many of the advances in the field thus far have been technical in nature, and we need to catch up in thinking about the social side of things.
One characteristic that makes learning analytics distinct as an area of cyberlearning research is that the data processing isn’t being done (solely) for the sake of building understanding in a larger sense, but also with the intent to impact the specific learning activities that generate the data. This means that in addition to the long cycle of impact found in most research, you also have a short cycle of more immediate impact. For example, as collaborative learning analytics develop, we could collect data on the conversation we’re having right now and use it in real time to examine how our conversation is going, and then take action to adjust it as part of a cycle of self-regulated learning. Note that to evaluate “how our conversation is going” we implicitly need a reference point of how we want it to go. This is part of the contextualization of analytics to meet the needs of specific pedagogical contexts. For example, this conversation has mostly been you asking a question and me giving a somewhat lengthy response before you ask the next one. Thus, it may have low levels of negotiative moves and coherence across turns of talk compared to what we would desire for many collaborative learning scenarios; however, this is entirely appropriate for an interview context. Contextualization in the use of analytics (rather than a blind adherence to the idea that “more is better”) is what we need to strive for. The power of learning analytics will come from the juxtaposition of creating useful data traces, being able to interpret what they mean in specific contexts, and figuring out ways to integrate this process of data use into educational activities in ways that support, rather than disrupt or distract from, learning.
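To make the idea of contextualized reference points concrete, here is an illustrative sketch (invented turns and thresholds, not an established metric) of a simple participation-balance measure over turns of talk, judged against a reference point chosen for the pedagogical context.

```python
# An illustrative sketch: a simple participation-balance measure over turns of
# talk, compared against a context-dependent reference point.
from collections import Counter

turns = [
    ("interviewer", "How can learning analytics transform learning?"),
    ("interviewee", "For analytics to truly have an impact that transforms teaching and learning ..."),
    ("interviewer", "Can you say more about contextualization?"),
    ("interviewee", "To evaluate how a conversation is going we implicitly need a reference point ..."),
]

words = Counter()
for speaker, text in turns:
    words[speaker] += len(text.split())

total = sum(words.values())
shares = {speaker: round(n / total, 2) for speaker, n in words.items()}
print(shares)  # a very uneven split between the two speakers

# Hypothetical reference points: in a two-person collaborative discussion we might
# flag word shares outside roughly 0.3-0.7, whereas in an interview an uneven
# split is exactly what we expect, so nothing would be flagged.
```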
References
E-Listening Project
Wise, A. F., Speer, J., Marbouti, F. & Hsiao, Y. (2013). Broadening the notion of participation in online discussions: Examining patterns in learners’ online listening behaviors. Instructional Science, 41(2), 323-343.
Wise, A. F., Hausknecht, S. N. & Zhao, Y. (2014). Attending to others’ posts in asynchronous discussions: Learners’ online “listening” and its relationship to speaking. International Journal of Computer-Supported Collaborative Learning, 9(2), 185-209.
Marbouti, F. & Wise, A. F. (2016). Starburst: A new graphical interface to support productive engagement with others’ posts in online discussions. Educational Technology Research & Development, 64(1), 87-113.
Wise, A. F., Zhao, Y. & Hausknecht, S. N. (2014). Learning analytics for online discussions: Embedded and extracted approaches. Journal of Learning Analytics, 1(2), 48-71.
Wise, A. F., Vytasek, J. M., Hausknecht, S. N. & Zhao, Y. (2016). Developing learning analytics design knowledge in the “middle space”: The student tuning model and align design framework for learning analytics use. Online Learning, 20(2), 1-28.
Wise, A. F. & Vytasek, J. M. (2017). Learning analytics implementation design. In Lang, C., Siemens, G., Wise, A. F. & Gašević, D. (Eds.), The Handbook of Learning Analytics (1st ed., pp. 151-160). Edmonton, AB: Society for Learning Analytics Research (SoLAR).
MOOCeology Project
Wise, A. F., Cui, Y., Jin, W. Q. & Vytasek, J. M. (2017). Mining for gold: Identifying content-related MOOC discussion threads across domains through linguistic modeling. The Internet and Higher Education, 32, 11-28.
Cui, Y., Jin, W. Q. & Wise, A. F. (2017). Humans and machines together: Improving characterization of large scale online discussions through dynamic interrelated post and thread categorization (DIPTiC). In Proceedings of Learning at Scale 2017. Cambridge, MA: ACM.
Wise, A. F., Cui, Y. & Jin, W. Q. (2017). Honing in on social learning networks in MOOC forums: Examining critical network definition decisions. In Proceedings of the 7th International Conference on Learning Analytics and Knowledge (pp. 383-392). Vancouver, BC: ACM.