Quelling the Controversy Over Technology and Student Testing


By Gee Kin Chou     Jul 15, 2014


“In the aftermath of PARCC and Smarter Balanced’s recent pilots, ‘testing’ is the acrid word on most ISTE educators’ tongues when it comes to negativity around Common Core,” wrote Mary Jo Madda in her recent EdSurge post “Do Teachers Really Hate Common Core? From the Floor of ISTE 2014.” Her article seemed to reflect a popular storyline that testing gets in the way of teaching and learning.

But is this really what educators, students and parents believe? Delving deeper into the many dimensions of testing can reveal that the majority may hold a more nuanced view.

Coincidentally, a few days before ISTE 2014, more than 900 policymakers, educators, test developers, academics and other assessment professionals gathered in New Orleans for the National Conference on Student Assessment 2014 (NCSA 2014), organized by the Council of Chief State School Officers to share their research and thoughts on how assessments can accelerate student learning, rather than slow it down.

Most of the attendees would likely agree that there is too much testing, but they are dedicated to finding solutions that can pinpoint a student’s achievement against standards with minimal disruption to teaching and learning.

No Child Left Behind: The big culprit

The common view at NCSA was that things started going downhill with No Child Left Behind (NCLB). Any mention of testing became synonymous with the annual “high stakes” tests that states were federally mandated to administer, and avoiding the consequences of missing the annual targets of NCLB became an obsession for many state and district administrators. “Race to the bottom” (describing how states lowered standards to allow more students to pass the tests) and “teaching to the test” were terms that reflected the general cynicism towards this unhealthy environment. School districts reduced class time for non-tested subjects to push reading and math. The final straw was the move in some states to link teacher pay to test results.

Common Core testing: a civil rights issue

John White, Louisiana’s State Superintendent of Education, laid out the rationale for Common Core testing for the audience at the opening luncheon: “Comparability of test scores is a civil rights issue for all the kids in Louisiana.” (He really meant for all kids in the US.) He cited the example of a girl in New Orleans who had been led to believe she was doing well in school, with grades that made her the high school class valedictorian. Yet when she took the ACT for college entrance, she scored an 11--which put her behind 99 percent of other test takers in the US. White’s message was self-evident: This girl should have known where she stood against nationally accepted standards throughout her years in school, not only at the end when it was too late.

Strip away the politics opposing the “nationalization” of education, and shelve for now whether or how teacher compensation should be linked to performance, and it is hard for anyone to argue against keeping students, teachers and parents continually informed about the progress of each student towards attaining the skills necessary to do well in life.

A chance to get it right this time round: “Next Generation Assessments”

While acknowledging the suspicions that Common Core will turn into NCLB v2.0, attendees at NCSA 2014 spoke of a “balanced assessment system” composed of frequent formative assessments to inform teaching, a few interim assessments throughout the year to check alignment with standards, and the annual summative assessment for accountability.

Recognizing that this could add up to a lot of testing, efforts have been underway to develop “Next Generation Assessments”--assessments that add value to the teaching and learning process while also measuring student achievement against standards, and that gradually blur the boundaries between formative and summative testing. Working under a cloud of constant scrutiny, test developers have been researching, creating and field testing innovative assessments that both students and teachers may eventually welcome into the classroom.

Technology could be the game-changer, but it is still early days

Almost all of the media’s reporting on the role of technology in the PARCC and SBAC pilot tests completed this past spring has been simplified to “students took the tests on computers instead of filling in paper bubble sheets with a pencil.” Some articles have noted that the question the computer presented to a student depended on her response to the prior question (“computer adaptive testing”). Still, most people have imagined kids sitting at computers clicking on radio buttons, or dragging and dropping objects, to select an answer from several choices. The reality did include some constructed response items, but not much more. The power of online testing has barely been touched.
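For readers curious about the mechanics, the core idea of computer adaptive testing can be illustrated with a minimal sketch. Everything here--the item pool, the difficulty scale, and the simple step-up/step-down ability update--is an illustrative assumption, not the actual algorithm used by PARCC or Smarter Balanced (real systems use item response theory models with far more sophisticated estimation):

```python
# Toy sketch of computer adaptive item selection: the next question's
# difficulty tracks a running ability estimate that moves up after a
# correct answer and down after a miss.

def next_item(items, ability):
    """Pick the remaining item whose difficulty is closest to the estimate."""
    return min(items, key=lambda item: abs(item["difficulty"] - ability))

def update_ability(ability, correct, step=0.5):
    """Nudge the ability estimate toward harder or easier items."""
    return ability + step if correct else ability - step

# Hypothetical item pool: difficulty on an arbitrary -2..2 scale.
pool = [{"id": i, "difficulty": d} for i, d in enumerate([-2, -1, 0, 1, 2])]

ability = 0.0
for answered_correctly in [True, True, False]:
    item = next_item(pool, ability)
    pool.remove(item)  # each item is presented at most once
    ability = update_ability(ability, answered_correctly)

print(round(ability, 1))  # prints 0.5
```

The point of the adaptation is efficiency: by steering toward items near the student’s current estimate, the test can locate her level with far fewer questions than a fixed form would need.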

Test developers are experimenting with items that are more like simulation game modules that challenge the student through a series of tasks, each of which depends on the student’s previous decisions. The record of a student’s time-on-task, keystrokes and mouse-clicks collected by interactive e-books, adaptive instructional software, and educational games provides a multitude of data for educators to track a student’s learning progress, and offers the potential to blend instruction with both formative and summative assessments into one continuous process that engages the student.

In the meantime, the need for “assessment literacy”

Lamenting that most Americans know very little about educational testing, something that not only has a huge impact on so many lives but also elicits strong emotions, the National Assessment Governing Board (NAGB, the organization that administers the NAEP assessment) has started an initiative to raise general knowledge of the function and limitations of testing. NAGB seeks to help people understand the difference between summative and formative tests, recognize that test results should only be used for their intended purpose--that not all tests can be assumed to be good enough for any use--and exercise caution in reading too much precision into the numbers.

Achieving these objectives would seem to be a good way to quell some of the current controversy. The cautious hope among many NCSA 2014 attendees is that technology eventually will enable assessments to be fully integrated into instruction, and students will neither know nor care whether the activity is being “graded”; they will just be learning.
