Spring means one thing to many elementary and middle schools across the country: test prep. Students are gearing up to take federally mandated, state-administered exams, and teachers, administrators and parents all feel the heat.
The stakes will rise if the Trump administration is successful in shifting funding to vouchers. Schools will be in competition to attract students and the publicly funded voucher dollars that come along with them. Parents, in an effort to find the best school for their children, may latch on to state test results as a comparative measure they can easily understand.
No wonder, then, that this season is often associated with test prep, which largely consists of repeated practice tests and discussions about right and wrong answers. The problem is, an increasing body of evidence points to one conclusion: most test prep doesn’t work.
While test prep has been on the rise since state tests were mandated by federal law in 2002, reading and math scores have largely remained flat. At best, test prep can yield modest, short-term gains in scores. At worst, it can narrow the curriculum, undermine meaningful learning, and stifle student interest and motivation.
Preparation can consume weeks or even months of classroom instruction. Materials are typically decontextualized, with multiple-choice items that mimic the format of standardized tests. Teachers often refer to this approach as “drill and kill,” and it bears little connection to the concept- or theme-based lessons that teachers use throughout the year.
Test prep generally doesn’t work because the design is premised on a fallacy: that the best (and only) way to improve scores is to practice the test itself. It’s an understandable misconception. For instance, if you want to become proficient at dribbling a basketball, head out to the driveway and dribble until your hands bleed, right?
Yet research supporting this notion is scant. University of Colorado researcher Derek Briggs examined paid, in-school and out-of-school SAT prep programs. Given how much these programs cost, one would expect them to have a big impact on results. But the study found that, when correcting for student motivation and demographics, no forms of preparation had statistically significant positive effects on the verbal section of the SAT. In fact, the scores of students enrolled in special high school test-prep classes went down 10 points.
So why does simple repetition work for basketball dribbling but not for standardized academic tests? Because unlike dribbling, the skills, background knowledge and critical thinking required to perform well on those exams are sophisticated (despite the deceptively simple format of multiple-choice questions). Doing well on tests is the result of years of accumulated knowledge and cognitive abilities.
Judy Willis, a neurologist who left her medical practice to become a teacher, has written about the negative consequences of test prep: “Boredom, frustration, negativity, apathy, self-doubt, and the behavioral manifestations of these brain stressors have increased in the past decade.” Rote memorization and high-stakes testing, she notes, cause increased stress, which hinders learning. “In the stress state, the lower, reactive brain is in control. Retrievable memory is not formed, and behavioral responses are limited to involuntary fight/flight/freeze—seen in the classroom as acting out, zoning out, or dropping out.”
The best way to raise scores over the long haul is through rich, authentic learning experiences that feel relevant to students. State exams aim to understand students’ ability to apply their knowledge from one situation to the next, a process cognitive scientists call “transfer.” When students learn state standards in a way that feels interesting and challenging, they acquire knowledge faster and on a deeper level, and transfer improves. That means developing lessons that aim for depth, rather than breadth.
There’s evidence that this approach pays off. Andreas Schleicher, director of the organization that administers the PISA international benchmark math tests, said that higher-performing nations structure their math curriculum differently, teaching fewer topics but in greater depth. “Students are often good at answering the first layer of a problem in the United States,” he says, “but as soon as students have to go deeper and answer the more complex parts of a problem, they have difficulties.” The U.S. placed 31st among all industrialized nations on PISA, a ranking that has fallen for the second year in a row.
It would be naïve to dismiss the reality of high-stakes tests by imploring educators to ignore them, or suggesting that if teachers simply “teach well and love the children” the scores will take care of themselves. It’s prudent to familiarize students with the test format so they don’t get too nervous or tripped up.
But the time spent on that sort of prep should be measured in hours, not days or weeks. Some of the strongest warnings come from testing companies themselves. Mathina Calliope of NWEA, which develops the widely used MAP assessment, answers the question of whether you can prepare students for the test: “Of course; it’s called good teaching. Should you sacrifice a week (or more) of instruction to give children pointers on the process of elimination? Please, please … pretty please, don’t.”
“Drill and kill,” as practiced today, is killing time, teachers’ love of teaching, and students’ love of learning. There is little evidence of its upside. Classroom minutes are precious. Let’s spend them preparing students for the world, not the test.