It’s one thing to set goals for student achievement. It’s another thing entirely to define what success looks like for blended learning programs. That very challenge emerged as a prevalent theme at the iNACOL symposium, held November 8-11, 2015, which brought 3,000 educators, edtech entrepreneurs, nonprofit representatives, and thought leaders to Orlando to discuss blended learning.
Issues around personalized frameworks and virtual schools slipped into conversations. Yet the question of assessing “success” popped up over and over again. Among the questions voiced: Should we point to test scores as emblematic of a blended program that works? Is that limiting? What about teacher adoption of technology--if teachers adopt a tool, isn’t that a success in its own right?
EdSurge took to the floor of iNACOL to learn the perspectives of a few of those 3,000 conference goers. Here’s what we heard.
Test Scores: Not Quite the Bottom Line
Arguments over the merits of test scores--or, more broadly, quantitative measures of knowledge or skills--made their way into many iNACOL presentations, whether the scores in question came from a PARCC exam, an ST Math activity, or even a teacher’s end-of-day exit tickets.
During Monday’s “Let’s Get Real: What Does ‘Success’ Look Like for Learning Technologies?” session, panelists hammered at the point that test scores are only one of many measurements of how well students learn with the assistance of technology.
Mahnaz Charania, Director of Research and Program Evaluation at Fulton County Schools, for instance, questioned the relevance of test scores when asked: “How much of the measures of edtech success should be based on improvements in student test scores?”
“I'm very skeptical when test scores are the only thing being shown in an edtech tool,” she responded. Charania explained that the Fulton leadership team usually comes to her Research and Program Evaluation team to evaluate a tool's effectiveness. When they do, she lets them know she’s more interested in measuring soft skills as leading indicators. “Over time, the test scores will hopefully improve, too, by default,” she says.
Similarly, Eian Harm, Research and Innovative Projects Coordinator for the West Ada School District in Meridian, Idaho, noted that a set of scores generated by a blended learning program such as ST Math or DreamBox doesn’t necessarily correlate with external test scores.
“Students could be showing these great results in this tool, but take a test outside, and they flatline,” Harm said. “How can this show the hard work that teachers are doing?”
In fact, the issue of “causality”--whether edtech tools are actually driving improved test scores--came up again and again. For instance, panelist Seth Corrigan of GlassLab asked: How in the world do we really know if a particular program caused a particular outcome?
Overhyped claims are a red flag. When Charania’s district purchases edtech tools, “the question of causality comes up in my mind when I see something that says ‘This raised student test scores by X%,’” she observed.
It’s not just a math score issue, either. How do teachers know if an edtech game increases students’ grit? Or, if a student uses NoRedInk and Curriculet, how can they tell whether that student is genuinely able to argue a point?
Aligning ‘Success’ to Vision
When an iNACOL participant asked what schools and districts should weigh besides test scores to define success, Charania responded “academic outcomes”--a term with a nuanced definition that varies from school to school. And many iNACOL attendees argued that it is a school or district’s leader who sets the goal for what constitutes “success.”
“There are lots of different kinds of school models and variations out there,” said Stacey Childress, CEO at NewSchools Venture Fund, adding that she and her team are trying to identify the range of “factors of success” in blended environments.
Aylon Samouha, co-founder of the school-design nonprofit Transcend and former Chief Schools Officer for Rocketship Education, referenced the Achievement First Greenfield model--a collection of charter schools whose leaders, including middle school principal Robert Hawke, believe one can measure “curiosity” as evidence of student growth. While Samouha concedes that “there are no instruments at the moment to measure curiosity,” he remains optimistic.
“We've tried to detect curiosity in our own small ways with rubrics, observation. We will start to find ways that we measure it, at least at the practitioner level,” he says.
Samouha also expressed appreciation for a leader’s vision in keeping those measurements close to a teacher’s practice, reflecting on the fact that test scores are often used in harmful, rather than helpful, ways. “There's a deeper question [at Greenfield] about how we are going to use measurement as a way to build practice, rather than as a perverse incentive.”
Just Do It Right
However leaders and educators define “success” throughout the learning process, the Highlander Institute’s Shawn Rubin worries that focusing on the end product may blind administrators to the success that teachers and students have along the way.
“You've got to get the implementation first,” he says, “and then you can be successful in the performance category--or at least track success.”
Perhaps he is right, and going forward, acknowledging the small successes of implementation is the precursor to student success--however one might define it.
“Celebrate the success of educators who are at least making the transition and trying to go blended,” Rubin recommends.