Are great teaching and assessment fundamentally at odds? One might think so because, unfortunately, the words “test” and “assessment” are often used interchangeably. When we think about testing, we sometimes visualize sharpened pencils and Scantron machines. Worse yet are expectations that teachers teach to the test—cogs in a feedback loop that is anemic and detached from the purpose of teaching and learning.
What often gets lost in the mix is that while teachers loathe testing, they love assessment. And there’s a clear difference. Great teachers are keen observers of student behavior, and avid, systematic consumers of the data that observation generates. They ask students to raise hands to share what they know. They use exit tickets to check for content mastery. They watch and listen for collaboration and communication skills as students work in small groups. Great teachers use data to group, and regroup, students to deliver and tackle meaningful challenges that they will remember for years to come. Done well, assessment is ongoing, constant, and invisible.
Ten years ago, invisible assessment was a concept long on potential but limited by the realities of the classroom. Although observation-based assessments have been used for decades to track the progress of young students, the process was cumbersome, highly variable, and hard to scale. If upper-grade teachers were lucky, they had access to limited benchmark data. But little by little, invisible assessment is becoming a reality. Here’s why:
Early childhood educators now report widespread access to technology in the classroom: more than 90 percent have it available, and 88 percent use it at least once a week. The explosion of mobile computing in classrooms is paving the way for invisible assessment in two major ways.
Gone are the days of scribbling observations on notepads or tabulating data on nights and weekends. Today, observational assessment platforms live in the cloud, device-agnostic tools modeled on the very best of the consumer web. They enable early educators to securely record student work in real time and create dynamic portfolios of student progress and machine-readable data.
Mobile computing also means more students have access to digital resources that not only engage and motivate, but also give teachers unprecedented insight into student learning patterns and preferences. Like the rest of us, students are consuming content online. Tools like Newsela can help teachers better understand variations in reading levels and tailor content to students’ interests and challenges. Others, like Formative or MasteryConnect, replace spreadsheets and notepads to help teachers create assignments in real time and make assessment happen naturally, and daily, without friction.
The transition from paper-and-pencil data collection to digital methods has been a boon for early childhood researchers, who now have access to longitudinal data. In aggregate, the evidence has allowed the field not only to identify the areas of development and learning that are most critical to future school success, but also to ensure that the tools being used are valid and reliable, and that teachers use them with a high degree of inter-rater reliability, easing long-standing concerns about variability.
With better data, researchers have also been able to reduce the number of assessment items required to develop predictive insights. Getting better insights from fewer questions—or formal tests—is a win for students and teachers alike. It reduces the burden on teachers during formal assessment reporting periods while still providing administrators the data they need for larger-scale, programmatic decision making.
Politicians are beginning to realize what great teachers have always known: that assessment, when done well, can inform teaching and learning without taking away from instructional time. That a uniform approach to assessment is bound to miss opportunities in a world where a multiplicity of strategies and tools collect information in real-time. That testing for reporting and assessment for meaning don’t have to be mutually exclusive.
The new federal K-12 education law promises to usher in an era of increased flexibility, returning control to states and districts that can (hopefully) ensure assessments are worth taking and generate useful insights to inform practice and improve outcomes. The law reflects an understanding that teachers and districts now have a multiplicity of tools for gathering data about student performance.
Rather than layer new—or entirely uniform—tests on teachers and schools, the new law creates a pathway for more localized, teacher-led selection of assessments. It opens the door for invisible assessment that draws upon the data we have, rather than imposes new mandates we don’t need.