CIRCL perspectives offer a window into the different worlds of various stakeholders in the cyberlearning community — what drives their work, what they need to be successful, and what they think the community should be doing. Share your perspective.
Krishna Madhavan is an Associate Professor in the School of Engineering Education at Purdue University.
How did you get into cyberlearning?
My background is in mathematics and computer science. When I was a graduate student, somebody mentioned these systems that can translate what you say into a different language. I speak six languages, so I wondered: what was the best pathway for me to help people translate language? Somebody suggested I look into applied linguistics or computational linguistics. So I read a bunch about it, went off and did a Masters in German, and ended up writing some language translation code. Then I came to Purdue, where we had all these international students coming in and working as TAs in STEM courses, and their English language skills weren’t great. The state (or at least the university) had a rule that everybody who was a TA had to go through a test that includes spoken English, to evaluate whether they are at a level to teach in a classroom. As a TA myself, I had to go take this test. I had to sit in front of a tape machine and record responses, turning it on and off at the right time. It was a real drag. I asked my advisor at the time: How can I change this stuff? She suggested writing some assessment code to help capture and automatically grade responses, route them cooperatively to raters, and so on. So I said okay, let me learn how to do that. I ended up writing a system that Purdue used to evaluate TAs. And that’s how I got into computer-assisted assessment systems.
As part of that, people were also saying to me: You know, the exams that you’re doing may not have great validity or reliability. So I did a bunch of reading and took some courses in assessment and statistics, and began working on a system that TOEFL and GRE used to evaluate the difficulty of certain items and how well they discriminate between different levels of performance. Through this work, I got an opportunity to do an internship at ETS in the statistical processing group and then subsequently in the machine learning group. These were all people that I had read about! I had a chance to learn directly from people who were inventing the next generation of statistical and machine learning methods for learning and evaluation.
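The item statistics he mentions have simple classical-test-theory forms: difficulty is the proportion of examinees who answer an item correctly, and a common discrimination index compares the top- and bottom-scoring groups. ETS-style work typically uses more sophisticated item response theory models, but a minimal classical sketch, using made-up response data (not anything from the systems described here), might look like:

```python
def item_difficulty(item_responses):
    # Proportion of examinees who answered the item correctly (1/0 coding).
    return sum(item_responses) / len(item_responses)

def item_discrimination(item_responses, total_scores, frac=0.27):
    # Upper-lower index: difference in proportion correct between the
    # top- and bottom-scoring groups (conventionally the top and bottom 27%).
    n = max(1, int(len(total_scores) * frac))
    order = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    low, high = order[:n], order[-n:]
    p_high = sum(item_responses[i] for i in high) / n
    p_low = sum(item_responses[i] for i in low) / n
    return p_high - p_low

responses = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]    # one item, ten examinees
scores    = [9, 8, 7, 2, 6, 3, 1, 8, 2, 1]    # total test scores
print(item_difficulty(responses))              # -> 0.5
print(item_discrimination(responses, scores))  # -> 1.0
```

An item answered correctly mostly by high scorers, as here, gets a discrimination near 1; an item that everyone or no one gets right discriminates poorly.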
Then I started getting interested in technologies to help people learn, in addition to assessing them. Back at Purdue, Mark Lundstrom was starting this thing where people could go online and run simulations. He, some colleagues, and a student wrote a system called PUNCH, which was a simple website where you could run simulations (e.g., around nanoelectronics) by stepping through pages of forms and entering values. I went to Mark and said, you know, there is this great opportunity to bring in other learning technologies that will enable you to transition this research directly into learning. So I got in on the ground floor of working on what became nanoHUB, as the first educational technology lead working with Mark Lundstrom and Gerhard Klimeck. As we were doing this, people were asking me, “How effective is this?” Since I knew the methods, after many years of data collection and analysis, we published a Nature Nanotechnology paper, along with Gerhard Klimeck, where we talked about our findings around efficacy. We also talked about how cyberlearning technologies can cut the transition time from research to education to somewhere between a few weeks and 6 months (depending on the tool), as compared to the 3 or 4 years a traditional textbook effort takes.
What educational resources did you put on nanoHUB?
I thought we needed a series of really well-produced seminars, lectures, homework activities, and so on, that people could just pull off of the web and start using. They were sequenced and put together by experts, but without much oversight. This was before MOOCs and such, around 2003. We also wanted to connect the activities with real research. If you have a disconnect between what people do in research and in the classroom, then it’s not as exciting for researchers. So we started talking to researchers about how their work would be applicable in a classroom. This led to nanoHUB offering these courses that included higher-end simulations (e.g., around atoms to transistors). We found that people weren’t averse to using the simulations in the classroom. They would log into nanoHUB, not even know they were using computational power similar to a supercomputer, run simulations many times, and then start assigning them as homework in the classroom. Nowadays we offer just-in-time, curated courses on nanoHUB-U. If you really need to know about a new nanophotonics process that has just come out of research, you can take a MOOC-like course in nanoHUB-U to learn the latest. I am one of the Co-PIs now; I lead the education activities and also work with analytics to understand whether we’re really having an impact. We’re using analytic techniques to see what people are doing and how they’re doing it.
People would also ask how we knew we were representing good research in the courses. So we wrote a system that did secondary citation calculations to demonstrate that we had a high h-index and such. But as I was working on nanoHUB and had conversations with scholars, it also became clear to me that people really didn’t know what others were getting funded for and why they were getting funded. There was really no sense of community. There also wasn’t enough data out there captured in a systematic way, and even when it was captured, it wasn’t indexed systematically. Many times people would tell us about their grants or projects that we didn’t know anything about, but they were doing similar things. So I thought if I’m having this problem, other people are, too, and maybe I should solve it by writing a resource.
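The h-index he mentions has a simple definition: the largest h such that at least h papers each have at least h citations. This is a generic illustration of that calculation, not the actual citation-indexing system described in the interview, whose data pipeline isn’t detailed here:

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank   # at least `rank` papers have `rank`+ citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4: four papers have 4+ citations
```

The “secondary citation” idea in the interview would apply the same kind of calculation to citations of the papers that cite a resource, but the definition of that metric isn’t spelled out here.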
So seeing the lack of awareness of related research led to DIA2?
Yes. I won an NSF grant for a project called Interactive Knowledge Networks for Engineering Education (IKNEER) to develop a system to support the browsing of collaboration networks and products, and lots of people started using it. The site hasn’t been updated for a couple of years now, but it’s still in use because the data there is still useful. It indexes all kinds of papers and articles that people can query and search over. Then NSF formed a subcommittee to understand how portfolio-mining tools could help them better understand their portfolio of funded research projects. The subcommittee contacted about 15 teams, who were given some limited access to data to build a system. Someone suggested that my team get involved, so we talked with the program officers and they brought us in when there were only about 3 weeks left to prepare a demo. We were at the very end, the last team, and it went well. They had all these questions about the people and projects they were seeing in our demo, but not about the technology; it disappeared, as it should. They thought we really had figured it out.
That was 4 years ago, and then we wrote the proposal to NSF to do DIA2 using the NSF portfolio. We had some great reviews, and some reviewers who said it couldn’t be done, because getting data out of NSF and being able to do this in a systematic way is just impossible, so it would be a waste of money. Our program officer, Don Millard, was great; he helped us get access to people at NSF who could answer key questions for us, and we did a lot of data collection there. We built prototypes really fast, and the more we showed users what we had, the more people were like, wow, this is very cool, when can we have it?
We think that such a resource can help inform not only what is being done in the community but also how knowledge can propagate and how things can diffuse in the community, allowing for better use of research results and pedagogical materials that come out of projects. We have over two million hits and have served over 150,000 unique queries with DIA2 in the last year, and we have about 29 papers. That’s pretty cool for a research project.
What are you working on now?
We’re thinking about doing a phase 2 proposal for DIA2 to index products that come out of research and allow people to easily go from award to product. We’ve been collecting a lot of data on what phase 2 should be, some of this in collaboration with CIRCL and other NSF resource centers to learn how we can better serve these larger communities.
We also just got a couple of grants. One is to develop and test a contextualized evaluation framework for MOOCs, in conjunction with nanoHUB-U and Boeing. The other is an EAGER grant around the notion of SMART Data. In the research community, people are collecting a lot of data from students and faculty, putting it in data warehouses, doing their analyses, and the papers come out. Eventually there is some trickle down into the classroom, but it is not a huge pipeline; very little makes it back. Our goal is to use all the big data that universities already have, like the trajectory of courses a student has picked, records from learning management systems, and their advisors’ reflections, to reduce time to graduation, keep people on track, and cut costs. We’ll be building an analytics framework and also a whole platform that any university would be able to use to put in all kinds of data and, for free, start giving value to their students. We are doing some initial planning under the EAGER grant.
All of this work ties into how you design environments. It’s driven by what I learned from nanoHUB and other projects. Another common thread is that I’ve always wanted to shorten the time it takes to transfer knowledge. And I always wanted to make research useful to the people who were trying to learn about it.
What would you like to see the cyberlearning community doing more of?
The learning-with-technology enterprise is very reactive. A new technology comes out and people in education chase after it. As Todd Oppenheimer writes in his book The Flickering Mind, every new technology since the dawn of civilization has been the silver bullet for fixing learning. This MOOC business is another silver bullet thing. When is the educational system going to drive the requirements? When is it going to go to technologists and say, you need to design technologies like this? What I would like the cyberlearning community to think more about is how we can be proactive in driving technology. Companies throw technology over the wall, like the iPhone or iPod, and we want to use it in the classroom. One of the few technologies that has come out driven by the education space is clickers.
The notion of co-design plays an important role. Many research proposals say they want to design such-and-such a system to teach X. Who is asking for this? Who is driving the requirements? It is your research interests that make you want to build the system, but who else cares about it? If you look at highly disruptive technologies, people are actively involved in co-designing them. There aren’t good forums out there where people can voice what they really want. Yes, people might just say “Put a whiteboard in my classroom.” But there need to be better ways for experts in cyberlearning to engage with people (who may or may not know the new technologies) in meaningful forums where we are truly trying to understand what their workflow is. People understand a lot but don’t articulate it in terms of product design. That is really where the problem is.

Let me give you an example: Blackboard. So many universities use it; how many have participated in the design of that system? Almost none. Blackboard will claim that they are talking to many people, but talking is very different from co-designing. And then there’s the MOOC business: the platforms are highly inflexible, extremely difficult to work with, and it’s very difficult to get data out of these systems. So end-to-end, someone is driving the requirements without really engaging stakeholders in the design. If you look at companies like IDEO, what they do is totally and actively engage people. I’m not saying that every decision should be run through a focus group. But if you never really understand what the real pain is in people’s lives, you’re not going to make a big impact.
We may need efforts that give people an avenue to really articulate their needs. If you had a free choice of any one technology to design that would take your day from a grumpy face to a smiley face, what would it be? I think many people would have an intelligent answer to this if you engaged them in a serious way. We need infrastructure for co-design of technology to make a serious change in how we design technology.