
Massive Study of Online Teaching Ends With Surprising — and ‘Deflating’ — Result

By Jeffrey R. Young     Jun 17, 2020


This article is part of the guide: Better, Faster, Stronger: How Learning Engineering Aims to Transform Education.

MIT professor Justin Reich and several colleagues just completed one of the largest-ever research studies of teaching techniques in online higher education, involving nearly 250,000 students from almost every country in the world.

The study, published this week in the Proceedings of the National Academy of Sciences, was meant to show that small behavioral interventions, like asking students in a pre-course survey to describe when and how they planned to fit the required course work into their lives, would significantly improve completion rates in large online classes.

The team thought it would be a slam dunk. Their previous research, with smaller numbers of courses and students, found impressive results, with the “plan-making intervention” improving completion rates by as much as 29 percent. “We thought this study was going to be six to nine months long, that we were going to get similar results and we were going to publish it and be heroes,” says Reich.

That’s not how things went, though. In a large-scale, years-long study spanning 250 courses running on the edX platform, the plan-making intervention had no significant impact on overall completion rates. The intervention did correlate with increased course activity for a week or two, but the effect faded out over the length of the course. Other interventions the researchers tested in the experiment also failed to deliver the results found in smaller trials.
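For readers who want to see what such a null result looks like in practice, here is a minimal sketch in Python of the comparison an experiment like this ultimately boils down to: completion rates in an intervention arm versus a control arm, checked with a two-proportion z-test. The counts below are hypothetical, not figures from the paper.

```python
import math

# Hypothetical counts, not from the study: completions out of enrollments
# in a control arm versus a "plan-making" intervention arm.
control = {"completed": 412, "enrolled": 5000}
treated = {"completed": 431, "enrolled": 5000}

def rate(arm):
    return arm["completed"] / arm["enrolled"]

def two_proportion_z(a, b):
    """z-statistic for the difference between two completion rates."""
    n1, n2 = a["enrolled"], b["enrolled"]
    pooled = (a["completed"] + b["completed"]) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (rate(b) - rate(a)) / se

z = two_proportion_z(control, treated)
print(f"control {rate(control):.1%} vs. treated {rate(treated):.1%}, z = {z:.2f}")
# |z| below ~1.96 means the difference is not significant at the usual
# p < .05 level, the pattern the edX experiment found at scale.
```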


They had hoped to use the large-scale study to offer professors an easy way to improve completion in online courses. Instead, they concluded that more research is needed to understand what kinds of contexts these interventions work in.

Reich called the experience “depressing, frustrating and deflating.”

“That is a cautionary tale that other product developers can use in their product designs,” he adds. The moral of that tale, he says, is that there is probably no easy, silver-bullet questionnaire that can be asked at the beginning of any online course to improve completion, as some learning scientists had hoped.

Considering Context

But that doesn’t mean Reich has given up on behavioral interventions in online learning. After all, he and his collaborators did get significant results in their earlier, smaller studies.

His message is that learning scientists need to pay more attention to context when they test various teaching methods.

Encouraging that kind of research would mean a change in how major funders of learning science operate, though. For instance, guidelines for funding jointly published by the National Science Foundation and the Institute of Education Sciences describe what Reich calls “a kind of ladder of funding, from design research to small pilot studies to implementation studies at one site” when laying out what research should get the most support.

That’s just what didn’t work in Reich’s latest study.

“What we ought to be doing is, if we find something that works somewhere, ask ourselves what would need to be modified or localized about that intervention in order for it to work somewhere else, so that we can start understanding how contextual variation” plays a role, he says.
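Reich’s point about contextual variation can be illustrated with a toy simulation (the numbers are invented, not data from the study): if an intervention genuinely helps in a minority of course contexts and does little or nothing elsewhere, a pooled analysis across 250 courses can still come out near zero.

```python
import random

random.seed(0)

# Toy model: each of 250 courses has its own "true" effect of the
# intervention on completion, in percentage points. Suppose it helps
# in a small share of contexts and does little or nothing elsewhere.
def course_effect():
    context = random.random()
    if context < 0.1:                    # e.g., courses with many at-risk learners
        return random.gauss(4.0, 1.0)    # a real boost
    elif context < 0.8:
        return random.gauss(0.0, 1.0)    # no real effect
    else:
        return random.gauss(-2.0, 1.0)   # a slight backfire

effects = [course_effect() for _ in range(250)]
mean = sum(effects) / len(effects)
print(f"pooled average effect: {mean:+.2f} points")
print(f"courses with a >2-point gain: {sum(e > 2 for e in effects)}")
# The pooled average hovers near zero even though the intervention
# genuinely works in a subset of courses, which is why Reich argues
# for studying where and why effects vary rather than hunting for
# one universal fix.
```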

The MIT scholar outlined his recent research misadventures and his recommendations in a Twitter thread this week.

René Kizilcec, an assistant professor in the School of Computing and Information Science at Cornell University who also worked on the study, found the experience “humbling and disheartening.”

But he agreed that the takeaway should be to pay more attention to context in learning science research.

“Teachers have long pushed back against these general ideas [about teaching innovation] by saying, ‘But my classroom is different,’” Kizilcec says. “If we embrace that [attitude] too much, then we can throw all the science out and make every classroom a snowflake classroom.”

Instead, he argues, “there’s things that can generalize, as long as we can understand the conditions in which they apply. The problem has been that learning science has not embraced as much the science of context.”

Reich, of MIT, has incorporated the experience into a new book due out in September, “Failure to Disrupt: Why Technology Alone Can’t Transform Education.”

“If you look at how progress in education tech has been made, it’s not through vast disruptive changes—it’s not through brilliant new tech that transforms in a shock,” he says, summing up the book’s conclusion. “The best work we do in edtech has a kind of tinker’s mindset to it,” he adds, meaning that teachers and developers gradually improve online learning by making small changes over time to tools like learning-management systems and videoconferencing software.
