How We Can Define and Improve Edtech Product Efficacy

By Muhammed Chaudhry     Mar 24, 2015

This article is part of the guide: Measuring Efficacy in Edtech.

As edtech products continue to flood the education space, it has become increasingly difficult for educators to distinguish products with real potential to improve student learning from slick marketing pitches that offer more hype than substance.

So many courseware developers are pushing products – some school districts field up to 20 sales calls a day from vendors – that school officials are overwhelmed and find it hard to discern which products will click with students and teachers and ultimately benefit learning.

Education technology products are relatively new in offering platforms where more data can be collected and measured, and the testing and evaluation needed to determine what works and what doesn’t is just as new. There are no uniform or clear-cut guidelines for users, and no industry standard for identifying quality products so that the most effective tools reach the students with critical learning problems and gaps. Each school district, school and teacher uses a different approach. As the field grows, this needs to be addressed.

Fifth-grade math teacher Ada Lee, whose students last year worked with the program MathSpace, tapped into What Works Clearinghouse (WWC) and Lea(R)n, two resources that helped her identify student needs, problem areas, solutions and student improvement. Both offer information and evidence from studies of the effectiveness of programs and practices, and they are among a handful of organizations that provide efficacy metrics. WWC, a service run by the U.S. Department of Education, reviews education research to share the best research-backed strategies with educators. Lea(R)n’s LearnTrials platform uses aggregated teacher feedback to help teachers and districts determine which products to use.

Lee, who teaches in Cupertino, was part of the Silicon Valley Education Foundation’s Learning Innovation Hub (iHub), which matched her and other Silicon Valley teachers with edtech vendors so teachers could introduce products to students and give feedback while the products were still being built. The iHub program creates a short, rapid feedback cycle that lets edtech companies measure product efficacy and gather input directly from classroom teachers to improve a product while it’s being tested.

Lee used several factors, developed by SVEF in conjunction with WestEd and Lea(R)n, to measure MathSpace’s efficacy: ease of use, student engagement, individual student needs, teacher training and interest, vendor engagement, and length of trial. She also conducted pre- and post-assessments of her students, reporting that by the end of the cycle, and confirmed by other testing, they had experienced “significant academic growth.”
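
Lee’s factor list suggests how rubric ratings and pre/post scores might roll up into a single trial summary. The short Python sketch below is purely illustrative: the factor names mirror her list, but the 1-5 rating scale, the normalized-gain formula and every identifier are assumptions, not the actual SVEF, WestEd or Lea(R)n scoring method.

    # Illustrative only: combine teacher rubric ratings with pre/post
    # assessment growth. Factor names follow Lee's list; the 1-5 scale
    # and the gain formula are hypothetical, not Lea(R)n's method.
    from dataclasses import dataclass

    @dataclass
    class TrialResult:
        ease_of_use: float          # teacher rating, assumed 1-5
        student_engagement: float   # teacher rating, assumed 1-5
        individual_needs: float     # teacher rating, assumed 1-5
        teacher_training: float     # teacher rating, assumed 1-5
        vendor_engagement: float    # teacher rating, assumed 1-5
        pre_score: float            # percent correct, 0-100
        post_score: float           # percent correct, 0-100

    def summarize(r: TrialResult) -> dict:
        """Average the rubric factors; compute normalized learning gain."""
        rubric = (r.ease_of_use + r.student_engagement + r.individual_needs
                  + r.teacher_training + r.vendor_engagement) / 5
        # Normalized gain: growth achieved as a share of possible growth.
        gain = (r.post_score - r.pre_score) / (100 - r.pre_score)
        return {"rubric_avg": round(rubric, 2),
                "normalized_gain": round(gain, 2)}

    print(summarize(TrialResult(4, 5, 4, 3, 4, pre_score=55, post_score=82)))
    # {'rubric_avg': 4.0, 'normalized_gain': 0.6}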

Even a student with ADHD who struggled with math improved with MathSpace. “The program helped me as a teacher identify his learning gaps. That was a great measure of success for both him and me. He became one of my star students,” Lee said.

Despite these successes, product testing continues to present challenges. With so many products on the market, there has never been a more critical need for solid and consistent testing. Karl Rectanus, co-founder and CEO of Lea(R)n, whose LearnTrials.com platform assists SVEF with product evaluations, compared the flood of products and overwhelmed school officials to “the Wild West,” with teachers, principals and administrators each working on their own to make decisions for their own students and needs. “The new frontier holds tons of promise, but most school district communication and management systems aren’t built to manage and educate all decision-makers about which products work best for which students.”

With SVEF’s iHub program, we continue to focus on quantifying student learning in partnership with research organizations WestEd and Lea(R)n. The LearnTrials platform lets teachers measure specific aspects of a product’s effectiveness, as Ada Lee discovered. Collecting several data points during product testing gives entrepreneurs an idea of how their product stands in the market, and teacher input gathered over a short period gives developers swift feedback so they can make adjustments in real time. Student input is captured through protocols developed by WestEd, adding another viewpoint. At the end of each round, WestEd aggregates the data collected on the Lea(R)n platform and crafts reports detailing recommendations. Over time, this iterative feedback loop not only measures a product’s efficacy but also improves it, and it does so quickly.
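
To make the end-of-round aggregation step concrete, here is a small hypothetical sketch that rolls individual teacher rubric scores from one trial round into per-criterion averages, the kind of summary a report might draw on. The record layout and criterion names are assumptions for illustration, not Lea(R)n’s actual schema.

    # Hypothetical sketch: aggregate per-teacher rubric scores from one
    # trial round into per-criterion averages for an end-of-round report.
    # The record layout and criterion names are assumed, not Lea(R)n's.
    from collections import defaultdict
    from statistics import mean

    round_scores = [
        {"teacher": "t1", "criterion": "student_engagement", "score": 5},
        {"teacher": "t2", "criterion": "student_engagement", "score": 4},
        {"teacher": "t1", "criterion": "ease_of_use", "score": 3},
        {"teacher": "t2", "criterion": "ease_of_use", "score": 4},
    ]

    def criterion_averages(scores):
        """Group scores by rubric criterion and average across teachers."""
        grouped = defaultdict(list)
        for row in scores:
            grouped[row["criterion"]].append(row["score"])
        return {c: round(mean(v), 2) for c, v in grouped.items()}

    print(criterion_averages(round_scores))
    # {'student_engagement': 4.5, 'ease_of_use': 3.5}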

Product testing is valuable because it exposes educators to promising solutions and educates companies about the challenges teachers face. SVEF emphasizes teacher feedback in measuring product efficacy because teachers know the realities of implementing edtech in the classroom. By measuring student achievement, teacher support and satisfaction, and student engagement (as defined by the Gates Foundation, a funder of our work), SVEF, in partnership with WestEd and Lea(R)n, determines the effectiveness of a product. Without teacher and student buy-in and measurable results, the sheer number of products in the ecosystem makes it overwhelming to distinguish what is effective. By measuring these key factors through teacher scoring on a rubric Lea(R)n developed, we can identify promising products and help those companies grow while including teacher and student voices.

Muhammed Chaudhry serves as President and CEO of the Silicon Valley Education Foundation (SVEF).
