
If Edtech Efficacy Research Ignores Implementation, How Does It Help Improve Education?

By Marcy Baughman and Dr. Kara McWilliams, May 17, 2017



If we really want to understand how effective educational technology tools are for improving learner outcomes, we need to stop throwing the baby out with the bathwater.

As instructors consider technology tools for their courses, they are increasingly looking for evidence of effectiveness. However, researchers evaluating the impact of these technology tools on learner outcomes often ignore a critical component: the user’s local educational setting and how they choose to use the product. This is where implementation science can help.

What is implementation science?

In their 2006 paper, researchers Martin Eccles and Brian Mittman described implementation science as “the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice.” What does this mean? We think of it as asking whether there is a difference between people who “read and follow the directions” when using technology and those who don’t.

Why should we care?

The environment and the way in which a digital learning program or tool is used can have a significant impact on its effectiveness. The best digital learning tools are based on educational research and are designed to maximize learning benefits when used in specific educational settings. But just think about the variety of those settings. In one college, perhaps the Wi-Fi won’t allow 50 learners to be logged on concurrently. In another, an instructor may be uncertain about how to use the technology and recommend it only for additional practice. Should we expect a learning product to have the same impact in such a variety of settings?

Many educational research studies use an “intent to treat” methodology, which assumes that study participants will use the digital program or tool assigned to them and ignores the variations in implementation that emerge in actual classes and courses. This approach can lead to misinterpreting the impact of the product, and it can obscure how an instructor could get better results by adjusting how he or she uses it. This is where implementation science can yield insights to benefit the instructor (and product developer).
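
To see why this matters, here is a minimal sketch (with entirely hypothetical numbers, not data from any actual study) of how an intent-to-treat average can mask differences in how a tool was actually used:

```python
# Hypothetical post-test scores (out of 100) for a made-up evaluation.
# "high_fidelity" students used the tool as recommended; "low_fidelity" students rarely did.
assigned = {
    "high_fidelity": [82, 85, 88, 90],
    "low_fidelity": [70, 72, 68, 71],
}
control = [74, 73, 75, 72]  # students not assigned the tool


def mean(xs):
    return sum(xs) / len(xs)


# Intent-to-treat: average everyone assigned the tool, regardless of actual usage.
itt_effect = mean(assigned["high_fidelity"] + assigned["low_fidelity"]) - mean(control)

# Implementation-aware: break the effect out by how the tool was actually used.
high_effect = mean(assigned["high_fidelity"]) - mean(control)
low_effect = mean(assigned["low_fidelity"]) - mean(control)

print(f"Intent-to-treat effect:  {itt_effect:+.1f} points")
print(f"High-fidelity effect:    {high_effect:+.1f} points")
print(f"Low-fidelity effect:     {low_effect:+.1f} points")
```

In this made-up example, the overall average suggests a modest positive effect, while the implementation-aware view shows the benefit is concentrated among students who used the tool as recommended.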

What are the benefits of considering implementation?

Studying implementation allows us to understand how different use cases of education technology tools influence learner outcomes. Future users of a tool can then consider their environment and needs, and identify an appropriate use case that will help them achieve their goals. Additionally, if a tool is not being used optimally, instructors can be offered professional development to drive better results.

What are the consequences of not considering implementation?

Although implementation adds another layer of complexity to researching the effectiveness of educational technology on learner outcomes, without it we risk missing significant factors that positively or negatively influence student learning. We also risk not actually researching what we say we are researching, which adds to skepticism about the value and validity of study results. For example, if we are trying to research the effect of using a technology tool versus not using it, we need to be certain that the technology tool is actually being used, and how.

How do we study implementation?

Big and small data can be used to understand whether following the recommended practices for using a tool in a specific educational setting leads to better learner outcomes.

The data many digital tools capture can often reveal important insights: for example, whether instructors or students are accessing the program regularly, how they’re using it (e.g., for test or stretch activities), and how students perform on integrated assessments. Platform data can also be used to compare actual usage with recommended usage (an “implementation fidelity metric”) and the associated learner outcomes.
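
As a rough illustration of what such a metric could look like (a sketch with hypothetical log fields, values, and recommended-usage thresholds; real platforms and products will differ), actual usage can be compared against recommended usage to score each class:

```python
# Rough sketch of an implementation fidelity metric computed from platform logs.
# All field names, values, and thresholds below are hypothetical illustrations.
usage_logs = [
    {"section": "A", "sessions_per_week": 3.2, "minutes_per_session": 28, "assessments_completed": 9},
    {"section": "B", "sessions_per_week": 1.1, "minutes_per_session": 12, "assessments_completed": 3},
]

# Assumed recommended-usage targets (illustrative, not any vendor's actual guidance).
recommended = {"sessions_per_week": 3.0, "minutes_per_session": 25, "assessments_completed": 8}


def fidelity_score(log, targets):
    """Fraction of recommended usage achieved, averaged across metrics (each capped at 1.0)."""
    ratios = [min(log[key] / target, 1.0) for key, target in targets.items()]
    return sum(ratios) / len(ratios)


for log in usage_logs:
    score = fidelity_score(log, recommended)
    label = "high fidelity" if score >= 0.8 else "low fidelity"
    print(f"Section {log['section']}: fidelity = {score:.2f} ({label})")
```

Fidelity scores like these can then be paired with learner outcomes to ask whether sections that followed the recommended usage saw better results.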

On-the-ground data, such as in-class observations, interviews, or surveys of instructors and learners across different learning environments, can provide critical context about how a tool is being used and the variations in usage, which can usefully guide our interpretation of platform data.

Currently, this process requires customizing the research instruments for every technology tool being measured, but metrics critical to measuring any technology implementation are being developed. Demand from educators to understand “how” a technology product is effective should help drive further research in this important area.

Summing it up

When reviewing impact research, especially when considering adopting an education technology tool, be cautious of studies that do not systematically address the implementation environment. Ignoring such local and varied contexts can bias results. The education community should not be satisfied with studies that only compare use versus no use, and should demand that implementation be researched. In other words, don’t throw implementation out with the bathwater!

Dr. Kara McWilliams serves as Senior Director of Impact Research for Macmillan Learning. Marcy Baughman is the Director of Impact Research for Macmillan Learning.
