Regular EdSurge readers should be familiar with the idea that feedback from end users (e.g., teachers and students) is vital to designing usable and effective tools for the classroom--you may have read about this here or here. We're not convinced, however, that the strategies that have been discussed are enough to ensure continuous improvement and increased utilization.
First, many teachers and students who volunteer feedback through surveys or edtech events are early adopters. It's not that their feedback isn't valuable; it's that their feedback may not be representative of the broader population of teachers and students. The latter group's experiences with a particular edtech tool are likely to differ from the early adopters' for a variety of reasons, including generational differences, comfort using the tools and attitudes toward technology. As a result, some end users may find a tool useful and intuitive, while a different set of users in a different context may have a hard time navigating the tool's features or getting it to work at all. If developers want a variety of teachers and students to use their tool, then they need to go out and find those diverse users to talk to and test their tool with.
As an example, in some of our recent work we recruited 73 teachers and nine instructional coaches from six different school districts (some charter and some traditional public districts) with very different socioeconomic and demographic contexts to provide feedback on a digital curriculum. By casting a wide net and working with districts to recruit a range of teachers, we got what we wanted: Some teachers had been using the curriculum for years, while others knew very little about it, and there was a mix of experiences in between. And as we hoped, the teachers’ varying levels of experience and their diverse backgrounds meant that their feedback was wide-ranging and highlighted issues the developers of the curriculum had not considered.
Second, what teachers and students tell developers about a particular tool may not be a good representation of their experience with and use of that tool. We're not suggesting that they are lying to developers in surveys, interviews or meet-ups. We're suggesting that developers could learn a lot more about the user experience by actually watching how teachers and students use the tool in a real classroom.
The value of observing teachers and students at work goes beyond gaining a more "accurate" picture of utilization; it can also uncover surprising kinds of use. A developer may think that the tool they have created is self-evident and even fixed in its classroom application, but a teacher or student may see different or additional uses. Either way, the developer would have the opportunity to learn and make changes--for example, to the design or even the marketing of the tool.
What's the takeaway for developers? First, don't stop your current practices of gathering feedback. But do expand the breadth of that feedback: Reach out to different teachers, propose small pilot studies with schools that don't already use your product and that differ from schools you've worked with in the past, or visit districts in a different part of your city, county or even state. Then expand the depth of that feedback as well. Keep the surveys and whatever other means you're currently using to elicit feedback, but then take the next step and ask if you can spend a day or two in different classrooms to see what's actually going on; you'll be surprised by what you see. None of this is easy, especially since it all should be ongoing, but it will give you insight into your customers and your product that you won't get otherwise--and that might give you a leg up in this busy edtech world.