'Expanding Evidence' Report Tempers Research With Design

The second of a three-part series analyzing the 'Expanding Evidence' Report

By Andrew Plemmons Pratt     Mar 11, 2013

Late in December, the U.S. Department of Education’s Office of Educational Technology dropped a 100-page draft policy report on “Expanding Evidence Approaches for Learning in a Digital World.” While a key focus of the report is on the kinds of information we should marshal to evaluate learning technologies, the more important lesson of the document is about people. Through case studies and reviews of current research, the report makes a lot of recommendations, but three stand out.

This review—Part II in a three-part series—highlights one of those recommendations: the need for design thinking and continual improvement processes when building digital learning tools. For more background on “evidence-based” research and a deep-dive into another major recommendation—the notion that technologists, educators, and researchers must collaborate across their respective fields—refer to Part I of the series.

Design-Based Research Is What Collaboration Should Look Like

Flip to the final sections of the report and you’ll find the “Evidence Reference Guide,” which helpfully summarizes the maze of different research approaches described in the preceding 75 pages. The format is straightforward: for each research approach, there are example questions a developer, researcher, or educator might face; this is followed by an “evidence approach” for gathering data; an explanation of what that “resulting evidence” reveals; and some proposed “uses” for feeding that evidence back into a solution.

In describing the “Collaborative Design” approach to evidence-gathering, the first sample question perfectly captures the situation of schools around the country grappling with how to leverage technology to improve student achievement: “How can this digital learning resource and the classroom activities it will be embedded in be designed to promote the targeted learning outcomes?” Awkward wording, but a question anyone working in edtech must ask.

Next in the chart is the suggested “Evidence Approach” for answering this question: “Co-design new digital learning resources and implementations through collaborations of teams of developers, education researchers, and individuals from the intended group of users (often teachers).” That’s right: get a whole bunch of people together to work across disciplines to create something from scratch and figure out how to implement it. It’s just that simple, eh?

Granted, this is a mere summary of approaches described in depth in the sections before it. But the “co-design” approach described in that single sentence encapsulates a terrifically rare and difficult sort of collaboration. It is the kind of interdisciplinary work that leads to well-engineered computers, airplanes, and office buildings. While it is not without precedent, design research is fundamental both to what educators do every day and to the invention of effective education technologies, yet it is scarcely understood as a formalized approach to education innovation.

The principles of design research go by a handful of similar names. The design consultancy IDEO brands its work in the field “Design Thinking for Educators.” Their approach is adapted from industrial design and engineering research, where rapid prototyping and iteration lead to practical solutions for tangible problems. The “Expanding Evidence” authors take the alphabet-soup approach and adopt the term “design-based implementation research,” or DBIR. Drawing on the work of William R. Penuel (of SRI International), they define DBIR as an approach to crafting educational interventions that are usable, scalable, and sustainable. Harkening back to the issue of speed in evidence-gathering, they explain that DBIR was “developed in response to concern that research-based educational interventions rarely are translated into widespread practice and that studies of interventions in practice put too much emphasis on implementation fidelity and not enough on understanding intervention adaptation” (p 19).

So what is it? To see design research in action, let’s turn to some case studies.

From Piecewise to Continuous Improvement

The fifth report recommendation states that the people who use digital learning resources should work with education researchers “to implement resources using continuous improvement processes” (p 89). This kind of continuous improvement based on data is baked into the approaches many developers and technologists bring to the field. A case study sidebar describes how Khan Academy uses A/B testing to gather information on which of two proposed adjustments to its assessments provides better insight into learner mastery. In this instance, the data mining is not a one-off approach to improvement; it is one of a variety of data-driven methods used to make continual improvements to the system. It’s an approach explicitly championed in the report’s introduction and revisited throughout: “Education practitioners will think about their activities as cycles of implementation, data collection, reflection, and refinement and constantly seek data and information to refine their practice” (p iii).
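The report does not describe Khan Academy's internal tooling, but the kind of A/B comparison it mentions can be sketched in a few lines. The sketch below is a hypothetical illustration, not Khan Academy's method: it compares the mastery rates of two assessment variants with a standard two-proportion z-test, where the function name and all of the numbers are invented for the example.

```python
import math

def ab_mastery_comparison(mastered_a, total_a, mastered_b, total_b):
    """Compare mastery rates of two assessment variants (A and B)
    using a two-proportion z-test. Returns (rate_a, rate_b, z)."""
    rate_a = mastered_a / total_a
    rate_b = mastered_b / total_b
    # Pooled rate under the null hypothesis that the variants don't differ
    pooled = (mastered_a + mastered_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (rate_b - rate_a) / se
    return rate_a, rate_b, z

# Hypothetical counts: 1,000 learners saw each variant
rate_a, rate_b, z = ab_mastery_comparison(480, 1000, 540, 1000)
# |z| > 1.96 would suggest a real difference at the 5% significance level
```

The point of the report's example survives even in this toy form: a single test like this is only one turn of the crank, and the result feeds back into the next cycle of refinement rather than settling the question once and for all.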

A prime example of this always-testing and always-improving process can be found in the Carnegie Foundation-initiated Statway project mentioned in Part I of this review. In an attempt to double the number of students who earn college math credits within one year of continuous enrollment (p 21), the schools involved in the project agreed to collaborate with one another, with researchers and developers, and with those who implemented the new programs (teachers). Moreover, they agreed to share the data they gathered and then discuss how to refine the implementation. After a small first iteration of the project produced lackluster results, a team redesigned the course, and a new version rolled out across the entire network the next school year. In the first year of Statway at the participating colleges, three times as many students earned a college math credit in one-third the time compared with historical averages.

The report authors interviewed Louis M. Gomez of UCLA, a Statway collaborator, about whether the networked schools had conducted an “efficacy study” comparing the new project with traditional methods. “Efficacy studies” fall within the domain of academic education research, and test whether an intervention can achieve a desired effect under ideal conditions. They are in essence the polar opposite of design-based research, which deals with real-world situations in all their messy and imperfect glory. Gomez’s reply is telling:

"All kinds of promising interventions are subjected to RCTs [randomized controlled trials, the “gold standard” of academic education research] that show nothing; often because they’re subjected to [experimental studies] too early. Equally important to work on is getting your intervention to work reliably across many different contexts. This is more important at this point than understanding whether Statway works better or worse than some other approach."

If design-based research aims to create solutions that are usable, scalable, and sustainable, then Gomez’s point here about getting an intervention to work reliably across a variety of contexts is important because it underpins the scalability of an intervention. Moreover, the process of continuous data gathering that informs cycles of improvement is fundamental to making an intervention sustainable. Tony Bryk, president of the Carnegie Foundation, argues that this work “should be structured as many rapid iterations of small changes, what he calls ‘rapid iterative small tests of change’” (p 23). These small changes can be “implemented quickly, can be tested repeatedly in multiple contexts to make sure they are really improvements, and are unlikely to do harm.”

We should stop and emphasize that this process of data-driven continuous improvement is what highly effective teachers do in their classrooms every day. Such teachers adapt their teaching to make it useful for everyday instruction, and they gather data (usually as formative assessment results) to make constant improvements to their own pedagogy. In high-performing schools, teams of teachers collaborate to scale effective methods across departments and buildings.

Stumbling Blocks Are Actually Building Blocks

Everyone working in education--not merely education technology--needs to understand the importance of rapid-cycle iteration for gathering evidence and improving any sort of intervention. The tendency in schools to try a product, project, or approach, find that it doesn't work as anticipated, and then throw up their hands and say “We’re not doing that again!” is not only a waste of time and resources; it’s a lost opportunity for educators to learn how to improve an intervention and build it again better--a cycle of innovation necessary for creating effective solutions for students.

Policy makers and grant makers need to fund and champion work that incorporates this design thinking approach to implementation. The report’s fifth recommendation states: “Even for a mature intervention for which extensive prior research has demonstrated effectiveness, education stakeholders should consider ongoing data collection and reflection as long as the stakes are high” (p 80). As long as the goal is to get students ready for college and careers, the stakes are high. Therefore, ongoing analysis and reflection are imperative and fundamental elements of education innovation.

In each of these examples, the design-based approach to continuous improvement involves multiple stakeholders sharing and reflecting on many kinds of evidence. But in common practice, designers, researchers, technologists, and teachers work in silos. Or at least, they hang out all too infrequently. Innovators who can successfully cross disciplines, on the other hand, have the potential to build powerful education interventions by combining evidence and problem-solving approaches from different professional communities. Part III, the final portion of this review, highlights steps each of these groups can take to build a network to replace the silos. The process starts with reading the same research. Research like the “Expanding Evidence” report.

