CIRCL perspectives offer a window into the different worlds of various stakeholders in the cyberlearning community — what drives their work, what they need to be successful, and what they think the community should be doing. Share your perspective.
Danielle McNamara is a psychology professor and Director of the Science of Learning and Educational Technology (SoLET) lab at Arizona State University. She develops educational technologies and conducts research to better understand cognitive processes involved in comprehension, knowledge and skill acquisition, and writing.
Congratulations to Dr. McNamara for being selected as a 2018 AERA Fellow!
How did you get started in cyberlearning?
I got started in cognitive science as a graduate student. Prior to that, I had an undergrad degree in linguistics, after I had gone to France and discovered a love of language. I taught English as a Second Language (ESL) for 5 years, and got a clinical degree — and then I discovered that what I loved was cognitive science. It all kind of melded together when I started doing research that had a strong foundation of learning theories with Alice Healy, and learning about reading comprehension research with Walter Kintsch. My ultimate goal always came back to improving learning in classrooms because I had been a classroom teacher.
When I started developing the interventions that I was looking at, what I really discovered was how hard it was to do. Thinking about scaling up an intervention at the classroom level seemed impossible. Plus, I wasn’t an education researcher, so it seemed beyond me. That’s how I had the idea to develop iSTART — turning what had been a 1-on-1 intervention and a classroom intervention into an automated tutoring system. As we built, expanded, and refined it, there was all kinds of research involved: studying the intervention itself, what parts of it work, and the extent to which games work better than non-games. The behind-the-scenes part was natural language processing, and that’s where my linguistics background came in. My ultimate goal was to help students who were being left behind by the system. So many researchers focus on students in early grades, so a common question was, “Why don’t you just intervene with early learners who are just beginning in elementary school?” My answer came from what I observed: students in 9th grade really had been forgotten. The question I’ve always had is: “Can we help them by teaching them strategies to catch up?” And how do we build automated systems that overcome some of the social and economic constraints, so that we can help students and teachers provide tutoring? At the same time, I started building NLP tools, because it was so hard to do it by hand. This was the late 90s, and to calculate word frequency, you had to look it up in a book! There were no automated indices for cohesion. We had to create everything from scratch.
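Indices like these are straightforward to compute today. As a rough illustration only (this is not the actual iSTART or Coh-Metrix pipeline), here is a minimal Python sketch of word-frequency counting and one simple cohesion proxy: word overlap between adjacent sentences.

```python
from collections import Counter

def word_frequencies(text):
    """Count how often each word appears in a text."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    return Counter(w for w in words if w)

def adjacent_overlap(sentences):
    """A crude cohesion proxy: for each pair of adjacent
    sentences, the fraction of unique words they share."""
    scores = []
    for a, b in zip(sentences, sentences[1:]):
        wa = set(word_frequencies(a))
        wb = set(word_frequencies(b))
        scores.append(len(wa & wb) / max(len(wa | wb), 1))
    return scores

text = "The cell divides. The cell copies its DNA first."
print(word_frequencies(text)["the"])  # prints 2
print(adjacent_overlap(["The cell divides.",
                        "The cell copies its DNA first."]))
# shared words "the" and "cell", out of seven unique words total
```

Real cohesion metrics go far beyond this, tracking argument overlap, connectives, and semantic similarity, but the basic idea of counting lexical signals is the same.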
So one part of my research is building systems to help teachers and students, focusing on literacy and strategies. The other part is building systems to help researchers understand language and language research, and making tutoring systems public and usable by other researchers. I can’t make enough of an impact alone on what should change and how to improve research and move it forward, so I need to help researchers, too.
You mentioned 9th grade; what other populations do you work with?
I work mostly in middle school, high school, beginning college, and adult literacy. For example, I use iSTART in my classes, we have several high schools using it, and we built a Spanish version that we’re using in South America. But I would like to extend it down. I’ve recently written grants to IES and NSF to build a system for 3rd and 4th graders, and we have been extending the strategies covered in iSTART so that it is able to help lower-level readers. I think we have a fairly good conception of the kinds of strategies that 3rd and 4th graders need to learn, too, but we need more research on that. The only automated system I know of for that level is the structure building work by Bonnie Meyer. We don’t have an extensive network of systems that go beyond drill and skill.
If I walked into a classroom using your intervention, what would it look like?
There are multiple interventions — iSTART and Writing Pal — but for both, we encourage blended learning. We have constructed a workbook that gives teachers suggestions on how to blend what the students are doing into the classroom. The classroom might look like students interacting on a traditional computer after they’ve done a module at home, for example. These systems are not meant to substitute for what a teacher can do; the teacher can bring it home in a more social environment. There are lots of activities, for both reading and writing, that we’ve come up with and tested with students, including games they can play and other activities that bring what they’re learning in the automated environment into the classroom. What we haven’t done is build systems that are automated for the teacher. That’s not really my gig. I focus on the students: understanding what they are learning, and bringing that into the classroom.
What new work have you been thinking about?
We need more systems that tackle comprehension for younger readers. The problem space I’m moving into now is multiple document comprehension. We need more research on how students read and understand multiple documents. The traditional way of assessing comprehension is with essays, which confounds writing and reading. I’d like to do research on the processes of how students understand these multiple documents, how we can improve their understanding, and the relations between reading and writing. Ultimately we’ll be building a system that combines iSTART and Writing Pal, but also expands them to provide strategy instruction when students are faced with both reading and writing. We don’t know what those strategies will look like; there’s not a sufficient base of literature. When I was building iSTART, we had a good basis of knowledge from researchers like Brown, Scardamalia, and Palincsar. Much less is known about multiple document comprehension. So I’ve written some grants to do basic research, and some to do applied research.
Another system I’d like to build is one that I call the Writing Assessment Tool, which is similar to Coh-Metrix, but assesses writing quality rather than reading difficulty. The tool will have three portals: one for students; one for teachers, providing assessments they can use in the classroom to evaluate multiple forms of writing; and one for researchers, giving them indices related to writing. The reason I want to do that is that there is a lot of confusion between the constructs of writing quality and difficulty of reading. They’re very different constructs. What predicts whether something will be difficult to read is not necessarily what predicts how someone will judge the quality of that writing.
Who might be your ideal partner(s) in the cyberlearning community?
It depends on the project and what my question is. One of the projects I’m doing is in the medical domain, to give doctors feedback on the difficulty of their language relative to the literacy of the patient. This has been a hard project. There have been times when it seems nothing is predicting anything, and then the flower comes out of the mud! We have to build automated indices that take email messages from patients, detect their literacy, then also detect the complexity of doctors’ language in email messages, and give feedback. Surprisingly, it might work. I get excited about lots of questions that have to do with literacy, language, strategies, and even general questions on NLP — and just getting researchers and others to pay attention to language. Giving feedback to students based on their verbal input, and getting that verbal input in the first place, is a barrier I’d like people to cross.
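To make the idea of "detecting the complexity of language" concrete, here is a toy sketch (a hypothetical heuristic in the spirit of classic readability formulas, not the project’s actual model): longer sentences and longer words push the score up, so jargon-heavy clinical prose scores higher than plain instructions.

```python
import re

def complexity_score(text):
    """Toy text-complexity heuristic (illustrative only):
    combines average sentence length and average word length,
    two classic readability signals."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    avg_sent_len = len(words) / len(sentences)          # words per sentence
    avg_word_len = sum(len(w) for w in words) / len(words)  # letters per word
    return avg_sent_len + 2.0 * avg_word_len

plain = "Take this pill twice a day. Call us if you feel sick."
jargon = ("Administer the prescribed analgesic biweekly and contact "
          "the clinic upon onset of adverse symptomatology.")
print(complexity_score(plain) < complexity_score(jargon))  # prints True
```

A deployed system would of course use validated indices and trained models rather than a two-term formula, but the input/output shape — text in, a comparable complexity number out — is the same.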
What would you like Congress to know about your work?
I want Congress to recognize the need for more educational funding across the board, not just for cyberlearning, and to recognize the differential funding that education receives compared to other agencies and domains. The link between education and health is staggering. So are the demands that are put on teachers. The gaps in terms of SES, race, everything that we’ve been trying to tackle, are just increasing. As people in education research, we really feel like we’re clawing at the wind. I’d want to convince Congress of the importance of education to the United States, and that it requires funding and a different mindset.