The Hard Truths and False Starts About Edtech Efficacy Research

By Michael Winters | May 8, 2017

This article is part of the collection: The Personalized Learning Toolkit.

“At best, we’re throwing spaghetti against the wall and hoping that it sticks. Except that we don’t even know what it means to stick.”

That is how Dr. Robert Pianta, Dean of the University of Virginia’s (UVA) Curry School of Education, described the current state of efficacy research in education technology, kicking off two days of discussion, debate and collaboration on the topic. The occasion was the Edtech Efficacy Research Academic Symposium, a gathering of nearly 300 researchers, practitioners, entrepreneurs, investors and edtech bigwigs held in Washington, DC, on May 3-4 and organized as a partnership among Curry, the Jefferson Education Accelerator (JEA) and Digital Promise.

The event provided a call to arms for those who believe that the efficacy of an edtech product should be the basis for educators’ purchasing decisions. Bart Epstein, Founding CEO of JEA, opened proceedings with a rallying cry, stating that the work done at the conference would help “ensure that the billions of dollars spent on education technology are spent on what works, not based on marketing perception.” Keynote speaker Jim Shelton, President of the Chan Zuckerberg Initiative, put it more succinctly: “What works ought to drive what we put in front of our children.”

This Could Be the Start of Something New

The debate over the exact definition of “research” emerged as a key theme of the symposium. Most attendees agreed that the randomized controlled trial (RCT) used in medicine and other fields, a study that compares an experimental group receiving a treatment with a control group that does not receive it, should be the gold standard in education technology as well. But many noted the myriad problems that come with conducting an RCT in education.
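
In concrete terms, the analysis at the heart of an RCT is a comparison of outcomes between the two groups. The sketch below is a minimal illustration with fabricated post-test scores and Python’s SciPy library; it is not drawn from any study discussed at the symposium, and a real trial involves randomized assignment, pre-registration and far larger samples.

```python
# A minimal sketch of the comparison at the heart of an RCT.
# All scores below are fabricated for illustration only.
from scipy import stats

treatment_scores = [78, 85, 92, 74, 88, 81, 90, 79, 86, 83]  # used the tool
control_scores = [72, 80, 77, 69, 84, 75, 82, 71, 78, 76]    # did not

# Welch's t-test: is the difference in mean scores statistically significant?
t_stat, p_value = stats.ttest_ind(
    treatment_scores, control_scores, equal_var=False
)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```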

Of particular note is the time-intensive nature of a traditional RCT. In a flamboyant presentation, Dr. Michael Kennedy, an assistant professor at UVA, described the complex, multi-year process he would have to navigate to obtain permission and funding for, conduct, and publish research on the efficacy of an edtech tool. A former edtech entrepreneur spoke from personal experience: It took seven years for trial results on his tool to be published.

As the discussion evolved, a more systemic problem for efficacy research became clear: Educators in both K-12 and higher education are not demanding efficacy research for the tools they buy.

Dr. Kennedy shared survey results indicating that nearly 90 percent of educators do not insist that a product be backed by efficacy research when making purchasing decisions. Dr. Susan Fuhrman, President of Teachers College, Columbia University, further observed that there is no measurable link between a product’s proven efficacy and the sales it generates. Given these challenges, it comes as no surprise that another study by researchers at the symposium found that just 54 percent of surveyed edtech tools had research to back the claims made on their websites.

We’re All in This Together

Researchers were quick to point out that the fault for the lack of research does not lie solely with edtech companies, but rather with the process itself; it is impossible for any company to hold its product steady for seven years during an RCT. “The very best researched technologies become obsolete by the time they make it through the long process of peer review,” Dr. Kennedy noted.

Meanwhile, other methods of determining a product’s efficacy are flourishing. Several attendees reported that company case studies describing successful implementations are currently in vogue. Others, including Jessica Heppen of the American Institutes for Research, argued that simpler methods like A/B testing, in which two similar designs of a product are tested with users simultaneously, can also count as efficacy research, depending on the stage of the product. Additionally, the U.S. Department of Education and LearnPlatform have recently released tools aimed at helping districts use their own data to make evidence-based decisions about whether to purchase or implement edtech tools.
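
For illustration, an A/B comparison of this kind often boils down to a simple two-proportion test. The sketch below uses hypothetical lesson-completion counts and Python’s statsmodels library; it is a minimal example under assumed numbers, not a description of any tool or company mentioned above.

```python
# A minimal sketch of an A/B test: compare lesson-completion rates between
# two variants of a feature. All counts are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

completions = [412, 389]     # students who finished the lesson (variant A, B)
exposures = [1000, 1000]     # students shown each variant

# Two-proportion z-test: is the difference in completion rates significant?
z_stat, p_value = proportions_ztest(count=completions, nobs=exposures)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
```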

Ultimately, the utility and ubiquity of different types of efficacy research will depend largely on the willingness of educators to accept them as valid. On this point, attendees’ opinions were split.

Matthew Rascoff, Associate Vice Provost for Digital Education and Innovation at Duke University, noted that word of mouth should not be discounted as a powerful indicator of what works. “People make decisions by word of mouth in every aspect of their lives. Why should edtech be any different?”

Others believed the focus should be on training educators to demand efficacy research. To this end, one group of researchers announced that they had already contacted at least one national teacher-training body that was receptive to the idea of providing additional efficacy training for educators. There were also calls for professional development around how to understand and use efficacy data.

Over the course of the second day, attendees broke into small groups to tackle individual issues, such as how to crowdsource reviews. Despite the numerous to-dos coming out of the symposium, an immediate, grand solution remained out of reach, but it may be the collegial spirit of the event that counts in the long term. “We can make real progress here, but we have to work together,” Epstein summarized at the conclusion of the conference.

Karen Cator, President and CEO of Digital Promise, agreed. “This is an ecosystem, and we all have a part to play.”
