Efficacy

The Government Wants You to Help Find Education Technology Tools That Actually Work

By Tony Wan | Aug 24, 2015

Can you prove that education technology tools actually work? And do so without spending years upon years and hundreds of thousands (if not millions) of dollars?

If so, the US Department of Education’s Office of Educational Technology wants to hear from you. It has issued a 36-page Request for Proposal (RFP) “to find support services to evaluate educational software applications purchased with the Elementary and Secondary Education Act (ESEA) program funds.” The mission, should you choose to accept it: create a set of tools for conducting “rapid-cycle technology evaluations,” including guidelines, assessment methods, and protocols for setting up the experiments and documenting results. This RFP will not self-destruct, but it does close on September 3 at 11am ET.

Weeding out unproductive tools is important. Given how hard the US has drilled into evaluating and assessing teacher performance over the past five years, it seems like poetic justice that edtech tools should be put under the same kind of scrutiny.

That said, evaluating products poses a challenge similar to evaluating teacher performance: what makes a tool or a process “work” in education, particularly in K-12, depends on a host of factors, some of which lie entirely outside the design of the tool. Phil Hill, a blogger and edtech analyst, calls the RFP “almost a good idea.” He notes that any proper evaluation of education tools must also take into account the staff and support structures within the schools. His main critique: “Edtech apps by themselves do not ‘work’ in terms of improving academic performance.”

What assessments of both teachers and tools leave out is any allowance for motivation. A motivated teacher can inspire and teach students using little more than a chalkboard. Others may simply be content to plop children down in front of a computer or tablet and let the software drive the majority of instruction.

The DOE project has two parts. The first is a planning phase, running from approximately October 2015 through June 2016, for designing the research and approaches to identifying schools and edtech tools to study. Three to six pilot evaluations, each lasting no more than three months, and accompanying reports are also due during this time. The second phase, which runs through January 2017, allows the awardee to refine the research design and conduct up to 12 additional one-to-four-month product evaluations.

There is also a third optional phase, lasting two years, to conduct up to 30 rapid-cycle evaluations per year. The full expected work timeline can be accessed here (PDF).

Richard Culatta, Director of the Office of Educational Technology, writes that the traditional approach to evaluating edtech efficacy “does not work well in the rapidly changing world of educational technology.” His main points: research takes a long time, and companies, especially startups, like to move fast. Running multi-year studies is an expensive endeavor, and many companies lack the resources to sit by and wait for results. Fast-moving companies iterate often, and their products may look different from month to month.

The idea of doing short-cycle efficacy studies is by no means new. The iZone, a unit within the New York City Department of Education, hosts its own “Short-Cycle Evaluation Challenge” to bring teachers and developers together to pilot edtech tools for one semester. In California, two nonprofits—the Silicon Valley Education Foundation and NewSchools Venture Fund—have teamed up to run three-month efficacy trials involving educators and entrepreneurs. And this summer, Chicago-based LEAP Innovations received a $5.1 million Gates Foundation grant for similar efforts.

It’s also not the Department of Education’s first attempt to answer the “what works” question. In 2002, it launched the What Works Clearinghouse, a database of over 10,000 peer-reviewed studies of education tools and services. However, many of the reports are published only several years after the studies were conducted.

The forthcoming Rapid-Cycle Tech Evaluations project will “establish a standard for low-cost, quick turnaround evaluations of apps, and field test rapid-cycle evaluations,” Culatta writes.

Can research keep pace with how businesses operate? Aligning these processes will surely be a major challenge for this project. Finding a feasible middle ground is critical not just for the industry, but also for the millions of students and teachers who shouldn’t have to wait years before finding out what works.

Equally important is finding the right mechanism to surface the findings so that stakeholders have better indicators of a tool’s effectiveness beyond star ratings and word-of-mouth opinions.
