Moving Beyond the 'Does Blended Learning Work?' Question

By Michael B. Horn (Columnist)     May 20, 2015

“Does ‘intervention x’ work?” is an age-old question in education. People always want to know if something--regardless of circumstance or how it’s implemented--will work. People often call this “searching for a silver bullet.”

The age-old answer in all social science research, not just education, is that “it depends.” On the surface, that’s a frustrating answer. But in reality, it mirrors how good theory--a statement of what causes what, and under what circumstances--works. “It depends” reflects the fact that a useful theory must account for all the salient factors of a situation and allow you to pick the right solution for your particular circumstance, so you can know whether something will be effective.

The same narrative applies to blended learning. As I’ve written numerous times, asking whether blended learning “works” is the wrong question. The answer, of course, depends on how it’s implemented, the learning model, the teachers, the students it’s serving, the software and more. Equally important is defining the problem--and, in concrete terms, what success in solving that problem would look like. Success metrics could range from whether students were proficient on a state test, to whether all students attained mastery of a set of knowledge and skills, to whether the goal was to boost engagement and intrinsic motivation.

What is becoming increasingly clear, though, is that blended learning has enormous potential to personalize learning and boost achievement for all students at scale. That potential is what makes it so exciting, even if it is not a guaranteed home run. And emerging research suggests that this potential is real.

What the Research Says

The Learning Accelerator recently released an initial report as part of its research clearinghouse on blended learning to help further this conversation and add digestible nuance that helps people navigate the research.

The report, compiled by Saro Mohammed, a partner at The Learning Accelerator, starts with the evidence of why personalized learning or individualized instruction can be so powerful, with references to several rigorous studies that have produced sizeable impacts on student learning.

The report includes the following:

  • First, a basic key to help consumers of blended-learning research better understand what that research does and does not say--and move us beyond the overly simplistic question of whether blended learning “works.”
  • Then, the report details the research designs of eight different studies (such as an analysis of the “Teach to One” math model) and, from an understanding of each study’s design, offers a way to think about the likelihood of replicating its results.
  • Finally, the key provides a rating of “alignment” to help readers understand whether a study directly measured the impact of blended learning itself on teaching and learning.

Let’s take a look at one of those eight studies in depth. “Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies” provides evidence that “more learning took place in online settings than face-to-face settings, with the most learning occurring in blended [ones].” The study is a meta-analysis--a highly rigorous design--which means there is a high likelihood of replication and high alignment to the question of the impact of blended learning itself. The one challenge, the report notes, is that most of the studies were in adult learning contexts; only five were in the K–12 context, and “there is also no way to tease apart whether these differences were due to the setting alone, or differences in curriculum materials, instructional practices, and learning time, which varied from study to study and were unmeasured.” Understanding the strengths and limits of this research is vital in choosing how to study blended learning’s impact further.

A review of a study by SRI International, “Blended Learning Report,” explains that the study used a “Matched-group” design and therefore has only “some likelihood of replication,” but is highly aligned with the question of the impact of blended learning. The summary helps us understand that the findings from this research on the impact of blended learning were mixed, but that the “qualitative findings can be used to generate future hypotheses and guide future research, as they shed light on the aspects of implementation that may be related to some of the academic outcomes (especially the negative outcomes) reported in these studies.”

What emerges from the report is that more studies on the impact of blended learning are being produced than is perhaps popularly recognized, and that these studies use an array of different research designs--designs that, in turn, have different implications for what we can usefully take from them.

The diversity of research at this stage is both expected and healthy as we move beyond asking simply whether blended learning works and toward an understanding of how it works and under what circumstances. That way, we can best decide how to scale the robust benefits of personalization for all students.

What We Need from the Research World

The Learning Accelerator will add more research to the clearinghouse over time as more work is published and evaluated. This is important: measuring the results of blended learning and drawing appropriate conclusions from those studies keeps us from falling prey to the hope for more silver-bullet solutions. A range of ecosystem actors--from individual researchers to think tanks, and from professional service providers to districts themselves--are leading the research work. But having an entity that puts these studies into perspective in a digestible way--so that the conversation isn’t ultimately about whether blended learning works, but about helping educators take the findings and make them actionable in their particular circumstances--is critical.

Michael B. Horn is the co-founder and Executive Director of Education at the Clayton Christensen Institute for Disruptive Innovation (formerly the Innosight Institute), a non-profit think tank devoted to applying the theories of disruptive innovation to problems in the social sector. He is also an official EdSurge columnist.
