Educators give feedback on products at EdSurge's 2014 Silicon Valley Summit / Greg Becker
Here at EdSurge, we’re all about getting people the information they need to make informed edtech decisions. So when we began hosting our Tech for Schools Summits in November 2013, we didn’t want to create just another conference. We wanted to put teachers at the helm of decision making by giving them plenty of time to explore products, talk with entrepreneurs and provide detailed feedback.
Last year we held six Summits. Local panels of teachers picked the overwhelming majority of participating companies. So far, that's involved more than 130 products, including 30+ "curriculum products" (from math and science to reading, writing and coding tools), 50+ "teacher needs" products (classroom management products, collaboration tools and lesson planning support), and a gaggle of products aimed at bolstering professional development, student authoring, studying and so on. Here's the full list.
To elicit teachers' input, we created three feedback forms that educators use to share their impressions and critiques of edtech products:
A “lite” feedback form is your classic multiple choice--our version of an exit poll on products;
A “deep” feedback form asks for more qualitative impressions;
A “critical evaluation” nudges teachers to answer more detailed questions.
And boy, did we get detailed feedback! Over the past 14 months, we've gathered more than 10,000 surveys on those 130+ edtech products.
We use the feedback in multiple ways. First, we return each teacher's commentaries from the deep and critical evaluations so they have their notes. We also share those commentaries with companies so that they can make improvements. We include much of that qualitative feedback on individual product pages in our Edtech Index. And we use some of the feedback (both qualitative and quantitative) in our Product Insight reports, along with broader analysis of edtech sectors and trending products.
Here’s what we’re learning based on the feedback from our six Summits.
What’s Been Featured at All Educator Days?
To secure a spot at a Tech for Schools Summit, companies submit an application that is reviewed by a panel of local educators. The chart below shows what types of products were included in our six All Educator Days in 2014, based on the main categories in our Edtech Index.
What Teachers Do at Summits
Yes, they do talk. And yes, they do listen--both to entrepreneurs and to one another.
Teachers spend as much time as they like talking to product teams. Afterward, the educators provide feedback. Our “lite” feedback forms ask teachers to rate products in several ways: on how easy the product is to set up, on its visual appeal, on how “actionable” the data it generates is, and on how much time they might save by using this product. The form also then asks teachers to provide an “overall” ranking. All these responses are based on a 1-5 scale, with 5 representing the most positive response.
Those separate scores let us ask an intriguing question: How does a product's overall score relate to those four components? For instance, would a high rating on “actionable data” vault a product into a positive overall rating? Or would a poor visual appeal rating doom it to a low overall score?
Our first hypothesis was that the ease or difficulty of setting up a product would dictate its overall score. Based on the feedback so far, this doesn't seem to be the case: teachers give products positive overall reviews even when they perceive them as difficult to set up. While 70% of products that scored a 5 on ease of setup also scored a 5 on overall score, the reverse does not hold for products perceived to be harder to set up. Just 12% of products that scored a 1 on ease of setup also scored a 1 on overall score, and almost 30% of products that scored a 1 on ease of setup scored a 5 on overall score.
Although setup time didn’t seem to affect a teacher’s broader opinion of a product, it appears that the overall amount of time saved by using a tool does have an effect. When we compared scores on “Saves Time” to the overall scores teachers gave, we saw a correlation: 88% of products that scored a 5 on time saved also scored a 5 overall, and only 10% of products that scored a 1 on time saved earned an overall score of 5. Moreover, nearly 40% of products that scored a 1 on time saved also earned an overall score of 1.
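The percentages above come from a simple cross-tabulation: among surveys that gave a component (say, "Saves Time") a particular score, what share gave the same product a particular overall score? Here's a minimal sketch of that computation in Python; the survey rows and field names are illustrative, not EdSurge's actual data or schema.

```python
# Hypothetical survey responses; each row pairs a component rating
# with the overall rating from the same "lite" feedback form.
surveys = [
    {"saves_time": 5, "overall": 5},
    {"saves_time": 5, "overall": 4},
    {"saves_time": 1, "overall": 1},
    {"saves_time": 1, "overall": 5},
    {"saves_time": 3, "overall": 4},
]

def pct_overall_given(rows, component, comp_score, overall_score):
    """Among rows where `component` == comp_score, return the percentage
    whose overall rating equals overall_score."""
    subset = [r for r in rows if r[component] == comp_score]
    if not subset:
        return 0.0
    hits = sum(1 for r in subset if r["overall"] == overall_score)
    return 100.0 * hits / len(subset)

# e.g. "X% of surveys that scored a 5 on time saved also scored a 5 overall"
print(pct_overall_given(surveys, "saves_time", 5, 5))
```

Running the same function for each component ("ease of setup", "visual appeal", "usefulness of data") against the overall score yields the kind of comparison described above.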
Looking at comparable data for the "Visual Appeal" of a product and the "Usefulness of Data" it produces, we drew similar conclusions. So far, the data we have collected suggest that the amount of time a product saves and its visual appeal are the two factors that correlate best with a product's overall rating; ease of setup and the usefulness of the data output are less strongly correlated with overall rating.
It’s still early days for edtech and these trends may change as more products are created and more teachers gain familiarity with using edtech tools in the classroom. That’s why we’re planning to host even more Summits in 2015. As the edtech landscape continues to take shape, we’ll be on the ground to find the newest insights and emerging trends. To 2015!
At Summits, educators complete three types of surveys: lite, deep, and critical evaluations. Lite surveys are quantitative only and thus are not included in the reviews posted on the Index.
Lite and deep reviews ask educators these 5-point rating questions, but critical evaluations do not. The rating questions were asked with a different scale at our Baltimore 2014 Summit, so those reviews are omitted from this analysis.