The Uneven Legacy of No Child Left Behind


By Nick Sheltrown     Jan 22, 2015


Since the No Child Left Behind (NCLB) Act was passed in 2001, it has served as a catalyst for intense policy debate, raising questions about the purpose of schooling, the role of the federal government in education policy, the value of high-stakes testing, and more. It has also become a source of disdain for many educators, to the point that its coauthor, Congressman George Miller, once reflected, “No Child Left Behind may be the most negative brand in America.”

But it’s easy to forget the impact that NCLB had in defining the first generation of data culture in schools. The law introduced a new vocabulary--from adequate yearly progress (AYP) and average daily attendance to safe harbor and subgroup analysis--and pushed for an education system driven by quantifiable, “scientifically-based” research. As a result, school leaders quickly became interested in understanding their schools through the lens of data (particularly high-stakes tests).

Reading the original text, you can feel its emphasis on data practice. The legislation used the terms “evidence-based practices” and “scientific research” over 100 times. It provided a formal definition of “scientifically based research” as research efforts that “employ systematic, empirical methods…[that] involve rigorous data analyses…[and] use experimental or quasi-experimental designs to evaluate effects…”

The quantitative emphasis of NCLB represented a sea change in education’s data culture, and made numbers matter like never before. This is not to say that the legislation got everything right, or even that it was a beneficial public policy. What it did do--for better and worse--is dramatically change the way school systems think about data.

At its best, it elevated two important ideas in data work. First, the law codified the principle that all students can learn, and by doing so threw into sharp relief the “soft bigotry of low expectations.” The performance of special education students, English-language learners, and historically disadvantaged students could no longer be masked by a school’s mean proficiency; NCLB popularized subgroup analysis to shed light on performance differences that district averages had long hidden. Second, it pushed schools to use evidence--both their own data and gold-standard research--to improve decisions about student learning. When making decisions, NCLB helped us move from depth of opinion to depth of evidence.

At its worst, NCLB pushed schools into data work in unproductive ways. Its blunt accountability rules placed the focus on test scores rather than on the learning those scores represent. The rules also produced revenge effects: although the law’s intent was for all children to reach proficiency, pragmatism in school districts led to triaging students and differentially targeting “bubble kids”--those students close to proficiency cut scores who offered the quickest way to raise test scores. This cynicism ultimately led to a number of cheating scandals--most notably in Atlanta Public Schools--where educators altered student answers on state tests. States played their own part in creating the illusion of proficiency by lowering cut scores to artificially inflate proficiency rates.

NCLB’s analytical legacy has created a fundamental tension: it made some data matter, but in doing so, engendered a number of practices that corrupt the data. Ultimately, NCLB succumbed to Campbell’s Law:

“The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”

NCLB emphasized data work, but its least productive side: annual, summative measures interpreted through labyrinthine accountability rules. Its sensible intent was warped by its proficiency-based approach to school evaluation.

Reading the text of NCLB more than a decade later, it is striking how much the world of evidence-based practices has changed since its adoption. The language of the NCLB Act codifies a traditional form of data work, one based in research practices imported from academia. It is the world of experimental design, null and alternative hypotheses, significance testing, and peer-reviewed publication of results.

Moving from what we want to know to what we know is a long, calculated process. The analytics at the fore of education today--specifically learning analytics and educational data mining--move at a speed and scale that was never possible when NCLB was drafted in 2001. Instead of focusing on stale test scores, learning analytics centers on collecting more frequent data to optimize the learning outcomes of all students through descriptive, predictive, and prescriptive analytics. The most promising application is personalized learning technology that tailors student learning experiences through dynamic analysis of fine-grained, click-stream data. Instead of casting learning measurement as an event (i.e., an annual test), personalized learning embeds assessment in the instructional experience. In such scenarios, we are far better positioned to use data to “leave no child behind” than NCLB’s data work ever was.

How should NCLB be remembered? It certainly has many well-documented limitations as educational policy, and only time will tell how history will judge it. Yet we should appreciate how this legislation elevated data work in education. NCLB created a culture of data work in schools that--though often blunt in its application--will hopefully live on in the next phase of school reform.
