The New York City-based Robin Hood Foundation recently announced a $5 million “College Success” prize for anyone who can “spur the development of an innovative, scalable and technology-enabled tool to improve the academic performance of underprepared college students.”
Lest you think that requirement is too broad and vague, here’s the breakdown:
$500K at the conclusion of the 2015-2016 school year to any team or teams that “increase year-over-year full-time persistence relative to the control group by a difference of 10 percentage points”;
$1M at the conclusion of the 2016-2017 school year to any team or teams that “achieve a 5 percentage point increase in the number of students who complete their associate degree within 2 academic years relative to the control group”;
$3.5M at the conclusion of the 2017-2018 school year to any team or teams that “raise three year completion rates 15 percentage points relative to the control group.” And if no team makes that grade, this grand prize money will go unspent.
Broad? Yes. Vague? Hardly.
First there’s the application. In August 2014, up to twenty semi-finalists will be announced based on expected impact, cost-effectiveness, scalability, measurability, strength of team, and appeal to community college students. These semi-finalists will then be invited to a “Showcase Day,” where 11 judges will select three finalists based on expected impact, functional demonstration, and cost-effectiveness.
Then there’s the process. “Relative to the control group” refers to the 7th-year doctoral student’s worst nightmare: randomized controlled experiments (and those pesky p-values). In contrast to the research-optional lean startup approach, which favors rapid-cycle prototyping over formal proof of efficacy, the Robin Hood folks are facilitating a three-year randomized controlled trial for the three finalist entries with “first-time, full-time freshmen with one or more remedial needs at the City University of New York (CUNY) from 2015-2018.” Even more concrete:
“The sample size for each intervention is approximately 500 students, so that the total sample is roughly 2000 students, including a control group. As such, at the time of the finalist evaluation, each finalist intervention must be able to scale their technology to at least 500 students for the duration of the three years (while staying under marginal cost restrictions for each student).”
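As a rough sketch of why roughly 500 students per arm is a workable sample, consider a hypothetical two-proportion z-test. The baseline persistence rate below (60%) is an assumption for illustration, not a figure from the official rules; the 10-percentage-point lift comes from the first milestone.

```python
# Hypothetical illustration of the statistics behind the prize's RCT design.
# The 60% control-group persistence rate is an assumed number, not from the rules.
import math

def two_proportion_z(p_treat, p_ctrl, n_treat, n_ctrl):
    """Two-sided two-proportion z-test using the pooled standard error."""
    pooled = (p_treat * n_treat + p_ctrl * n_ctrl) / (n_treat + n_ctrl)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_treat + 1 / n_ctrl))
    z = (p_treat - p_ctrl) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# 500 students per arm, with a 10-percentage-point lift over a 60% baseline
z, p = two_proportion_z(0.70, 0.60, 500, 500)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these assumed rates, the result lands comfortably below the conventional 0.05 significance threshold, which is why a few hundred students per intervention is enough to distinguish a real 10-point effect from noise.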
That’s not to say there won’t be plenty of lean startup along the way. According to the official rules, teams can collect usage data as long as it’s FERPA-compliant and used for “making improvements to the Team’s technological intervention or solution.” Additionally, during the first two rounds, the design consultancy ideas42 will provide “resources and advice to contestants on how to further develop and improve their tech-based interventions with behavioral science.”
And finally there’s the money. Robin Hood’s $5 million prize lands smack between the kind of funding startups get from the venture community and the more expansive grants available from the federal government. For instance, companies that matriculate through the Palo Alto accelerator ImagineK12 (arguably the most visible edtech incubator) can receive up to $100K before launching a product. On the opposite end of the spectrum, the federal Department of Education’s Office of Postsecondary Education has earmarked $150M for up to 127 awards in higher education, roughly $1.2M per award.
And if the delayed prospect of $5M isn’t enough, semi-finalists can expect $40K for making the cut, and another $60K if they are named finalists.
It's a great parlor game to guess who will be the top competitors for this prize.
There’s an argument to be made that the competition’s structure demands too much of the “fail early, fail often” mentality of the startup scene. After all, what small- to mid-size startup has the resources to take on a potentially three-year commitment?
Even so, large companies and foundations don’t necessarily have a leg up on their smaller, nimbler counterparts. Given Robin Hood’s commitment to measurable results, they’ll all need to put up or shut up.
And both approaches have seen many, many millions of dollars and countless hours lost on less noble endeavors.
At the core of the Robin Hood Prize is a strong opinion about how education technology innovations should be encouraged and measured. Whether or not it is the correct approach is beside the point. There should be room for accelerators, incubators, big R&D, Teacher Tanks, public-private partnerships, and, yes, three-year randomized controlled trials involving 2,000 community college students.
Which of these approaches will be remembered as the “silver bullet”? Perhaps none. Maybe all. But there’s a perplexing bit of comfort in knowing that the Robin Hood Prize will reward real results for real people, or none at all.