Reality Is Messy, Labs Aren’t: How to Make Research-Backed Education Work

Opinion | Efficacy


By Nathan Martin and Jay Lynch     Apr 1, 2017


If education researchers hope to see more of their findings influence everyday learning and instruction—and they desperately do—then their best bet may be to encourage education technologists to hone their design research skills.

Researchers frequently lament how little of even the most robust and replicable educational research permeates actual teaching and studying. But the challenges and constraints of practical educational settings mean laboratory-based findings don’t readily translate into the kinds of practices, resources and tools that can meaningfully improve teaching and learning. If research is to yield real-world solutions, it will take teachers, students, researchers and technologists working together to dabble, invent and test new ideas. Edtech companies, many of which already partner with teachers and students early in the product design process, are uniquely suited to facilitating the type of research necessary to bridge the gap between academia and the classroom.

Why the Research Black Hole?

Most interventions in educational research are not designed with educators in mind. Learning scientists rarely conduct studies specifically intended to identify methods and materials suitable for teachers and students. And when research is planned, little time is typically spent considering classroom limitations and the obstacles to implementation.

Yet simply knowing that a particular studying technique is a powerful learning strategy doesn't mean it is easy to put into practice. Take, for example, distributed practice: studying material in short sessions spaced over an extended period of time. It is perhaps the longest-known, best-researched and most consistently supported method for improving retention of information. Yet despite the mountain of evidence currently available, and incessant calls from the towers of academia to apply it in classrooms, there is little evidence of widespread adoption.
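To make the idea concrete, here is a minimal sketch of how a distributed-practice schedule might be generated. This is an illustration only, not the algorithm of any particular product; the function name and the expanding-interval rule (1, 2, 4, 8... days) are our own assumptions, and real spaced-repetition systems adapt intervals to each learner's recall performance.

```python
from datetime import date, timedelta

def spaced_review_dates(start, num_reviews=5, first_gap_days=1, multiplier=2):
    """Return review dates at expanding intervals after an initial study session.

    A toy model of distributed practice: each gap between sessions doubles
    by default, so material is revisited at 1, 2, 4, 8, 16... day spacings.
    """
    dates = []
    gap = first_gap_days
    current = start
    for _ in range(num_reviews):
        current = current + timedelta(days=gap)
        dates.append(current)
        gap *= multiplier
    return dates

# Example: after studying on April 1, schedule five review sessions.
schedule = spaced_review_dates(date(2017, 4, 1))
# First review falls the next day; the last lands a month out.
```

Even this trivial schedule shows why adoption is hard: it asks teachers to revisit the same concept weeks later, which collides with pacing guides and curricula built around covering each topic once.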

Why is this so?

An educator may be aware of the efficacy of spacing, but be unable, for practical reasons, to bring it into the classroom. There often is little support for the curricular or policy changes required to allow teachers to devote more instructional time to creating rich and varied experiences. Resources can be lacking to aid educators in the challenging task of devising multiple creative and engaging ways to expose students to the same concepts. And then there is the challenge teachers face to convince students, parents and administrators of the value of increasing difficulty and discomfort while slowing the pace of instruction.

These are not trivial obstacles to overcome. The result is that even the most powerful instructional interventions, which on their face appear absurdly simple and low-cost to implement, continue to gather dust in education journals.

The yawning disconnect between the world of educational research and the bustling milieu of the classroom manifests in other ways as well. For example, research in educational psychology often is characterized by efforts to investigate isolated interventions, in carefully controlled laboratory conditions, under short time frames, using motivated participants and focusing on learning outcomes like retention. As a result, many findings in education are quite fragile, disappearing outside of the conditions of the original experimental setting. This approach clearly is not ideal for improving our understanding of what interventions are most likely to improve learning in realistic settings with actual students.

This, of course, is not a problem unique to education. In fact, a frequent response by scientists to the current replication crisis in psychology has been to suggest that failures to replicate are largely attributable to subtle contextual differences in experiments, not necessarily a failure of the original theory. That is, small changes in elements such as intervention settings or implementation fidelity can dramatically change the findings.

Yet as science writer Ed Yong notes, “If the results are delicate wilting flowers that only bloom under the care of certain experimenters, how relevant are they to the messy, noisy, chaotic world outside the lab?” The true measure of an educational intervention’s value, researchers Alan Cheung and Robert Slavin write, is what happens when it is “implemented at a large scale under ordinary circumstances.” And unfortunately, many popular ideas in education today, including “growth mindset,” have come under fire recently for failing to replicate in realistic settings, prompting defenders to respond that their interventions are highly dependent on context and delivery, making it unlikely non-experts can reproduce their successes.

Consequently, there needs to be greater focus on the creative and neglected task of developing strategies, resources and products that empower educators and students to benefit from educational research findings, given the constraints, contexts and complex realities of actual educational practice. That is, an effort to convert robust research findings into useful tools, rules of thumb and imaginative resources able to meet teachers and students where they are.

Expanding the Educational Research Toolkit

This pragmatic and messy process will require practical trade-offs and nuance of the kind educational researchers often eschew. For instance, maybe a learning strategy with a large reported effect size will not be most effective in a particular classroom because students are unmotivated or lack necessary prior knowledge.

Botanist George Washington Carver took his academic research and translated it into theories that he would test in the fields. On the basis of those findings, he published his results and engaged directly with farmers to promote change in traditional farming practices. Education research needs a bit more of that kind of hands-on approach.

What we are affirming here is the value of an intermediary role between rigorous scientific research and applied practice: a role akin to that of learning engineers, whose task is to take research findings and tinker with them until the resulting designs reflect the artful, nuanced and embedded peculiarities of real-world educational settings. Perhaps the most promising tool for achieving this is design research, in which the primary goal is not to generate new knowledge but to construct useful tools based on current understanding.

A number of edtech companies now have learning design teams whose responsibilities overlap with many of the tasks outlined above. These teams often work with students and educators at early stages of product development to explore how an educational intervention is used and perceived in practice rather than in theory. Through careful observation and the collection of qualitative feedback, these teams can generate powerful insights that allow them to nimbly modify product designs and that also inform researchers.

The careful balancing act of responding to the preferences of educational consumers and learners, while remaining cognizant of the robust findings of educational research, can be a challenging one. The most engaging and enjoyable educational products are not necessarily the most efficacious. Yet technologists should embrace this important tension and work to validate and extend the promising findings of educational researchers to actual classrooms and products.

With these comments in mind, we offer the following suggestions:

  • Work closely with researchers, teachers, and students when conceiving, designing, and studying products to ensure that they satisfy the needs and constraints of those involved in the educational process.
  • Embrace the messy world of educational instruction and conduct studies that look for intervention effects capable of breaking through the din of the real-life classroom. Effects should not be so delicate that they fail to replicate across different settings or under low-quality implementations. Such studies are likely to yield more realistic estimates of an intervention's effect.
  • Conduct research that investigates the effects of interventions more holistically and programmatically. For example, look not just at a single school but at a multitude of disparate schools. Investigate an intervention not in a single class but across a course sequence.
  • Start with research designed to address the questions and challenges that are most important to educators and learners. (Strategic Education Research Partnership is an excellent example of this approach). There is a critical need to better understand the most pressing needs of education decision-makers and prioritize design efforts accordingly.

Moving Educational Research Forward

Surveys reveal that purchasers of edtech products rarely make decisions based on rigorous evidence. They are often motivated instead by informal tryouts and peer recommendations. We believe this to be an astounding moral failure.

The idea of pediatricians selecting treatments for children based on anecdote and intuition rather than evidence would be viewed as a serious injustice. As a society, we have a similar obligation to ensure that the educational products we prescribe for our children are not based on fads or wishful thinking.

But while education decision-makers have little influence over the disconnect between current educational research and the classroom, they do have the ability to demand changes in how educational research is done within edtech. There is currently a remarkable opportunity to drastically improve the relevance and usefulness of education research by pushing the edtech industry to adopt better research practices.

However, the changes that we’ve advanced will make research more costly, time consuming and challenging, so it is unlikely that many edtech companies will adopt these methods unless there is market pressure to do so. And it remains an open question whether edtech consumers are willing to pay the higher costs required to obtain more rigorous evidence of educational efficacy. Much as in the field of evidence-based medicine, quality education research will only be carried out when consumers “demand better evidence, better presented, better explained, and applied in a more personalized way with sensitivity to context and individual goals,” as Trisha Greenhalgh and colleagues have suggested.

Ultimately, the success of evidence-based education will depend on edtech, academia, parents and educational decision-makers working together to demand and conduct research specifically designed to serve the needs of learners and teachers.

Nathan Martin is a manager for efficacy and innovation at Pearson in the Office of the Chief Education Advisor. Jay Lynch is Senior Academic Research Consultant for Course Design, Development, and Academic Research (CDDAR) at Pearson.

