Nine Questions for Evaluating Education Innovation



Defining a rubric to set clear expectations for educators and entrepreneurs alike

By Tony Wan     Jul 23, 2013


"Alive in the Swamp" sounds like a cheesy survival horror flick. But it’s also an apt description for those entrepreneurs and educators slogging their way through edtech jungle, trying to figure out how to best build, deploy and assess technology for the classroom.

It’s also the title of a new report from Nesta and NewSchools Venture Fund, which offers a nine-point rubric to define what “success” in education innovation should look like. The work is based on a recent book by Michael Fullan, Stratosphere: Integrating Technology, Pedagogy, and Change Knowledge, which emphasizes that any sustainable efforts to impact education at scale must also focus on pedagogy and system change, in addition to technology.

In the report, authors Michael Fullan and Katelyn Donnelly (executive director of Pearson's Affordable Learning Fund) offer an index that developers and school administrators can use to evaluate the effectiveness of new products and implementation practices. The framework assesses innovations along three components--pedagogy, system change, and technology--and breaks each down into finer detail:


Pedagogy

  • Clarity and quality of intended outcome (Is it clear what the learning outcomes are? How are they measured?)
  • Pedagogy itself (What research is it based on? Has it been applied successfully elsewhere?)
  • Quality of assessment platform (How detailed is it? Does it offer actionable insights?)

System Change

  • Implementation support (How does the provider support technology and teachers?)
  • Value for money (What are the cost savings for the school? Are there hidden costs?)
  • Whole system change potential (Can it scale virally and laterally across teachers and schools? Does this require extensive management from the center?)


Technology

  • Quality of user experience/model design (How easy is it to use?)
  • Ease of adaptation (Is it accessible via different devices and means?)
  • Comprehensiveness and integration (How does the technology integrate with the learning environment in the classroom?)

In the appendix, the authors suggest using a four-point, color-coded grading scale and offer examples of what "good" and "bad" may look like for each criterion. Donnelly admits there is a degree of subjectivity when it comes to assigning grades on these questions, some of which are very open-ended.

The authors applied this rubric to a dozen products and schools, including Khan Academy, LearnZillion, Rocketship and Carpe Diem. Their initial findings suggest pedagogy and system support "are the weakest part of the triangle," especially as "entrepreneurs find it more exciting and absorbing to design and build digital innovations than to grapple with a new pedagogy, not to mention implementation."

Unfortunately, the authors have no plans to share those specific ratings, which could have served as a useful guide for how others might apply the grading system.

Still, Donnelly reiterated that the point of the rubric is not to serve as a standard for objective reviews and ratings, but rather to "spark conversation among school leaders, principals, teachers, and education entrepreneurs."

While far from perfect, establishing a standard set of expectations and questions for schools and developers could be the critical first step toward building solutions that make an impact at scale.

