Compliance-Based Data Collection Will Never Close Learning Gaps

Column | Big Data


By Farhat Ahmad (Columnist) | Sep 6, 2017


This story is part of an EdSurge Research series about how educators are changing their practices to reach all learners.

The beginning of the semester is always a struggle. Finding a balance between building relationships with students, developing community amongst class members and administering the assessments necessary to get the data needed to design instruction isn’t easy. Especially when you’re personalizing instruction for every learner in your class.

Two weeks into the school year, I have students who are still struggling with the diagnostic test, which is designed to see where students are in reading comprehension and writing for 11th-grade literature. This particular diagnostic is meant to mimic the Georgia Milestones test they will be taking at the end of the semester. One of my students, Michael (name has been changed), blankly stares at his screen, Beats headphones on, blaring music. I call him over to the front table and we talk. I explain how desperately I need the data this test will provide so I can design the class around his learning needs. I ask him what it is about the test that makes him tune out, explaining that I’m not mad, but that I genuinely want to know so I can help him be more successful.

He says, “It’s the words, it’s just too much.”

We both stare at the floor for a second. I understand where he is coming from. Michael transferred to our alternative education school because the general education program he was attending didn’t provide the level of support he needed. Forcing him to repeat the same kinds of assessments that measure what he doesn’t know is setting him up to fail—but this is senior-level literature, and there isn’t much wiggle room as far as text complexity.

But maybe there can be—if I find a way to ensure the curriculum maintains a high level of rigor. I can start him out with a text closer to his reading level to regain some confidence and give him a sense of accomplishment. I can continually push him to a higher level as he progresses, and I can work him up to at least one complex article each week in the short amount of time he is my student. It’s not the best solution, but Michael is supposed to graduate in December. The alternative is to let him sit there with his headphones on and get mad at him for being “lazy and disconnected.” That will continue the cycle of failure.

For those of us tasked with overcoming massive learning gaps and bringing success to at-risk students who have known nothing but failure, this type of compromise might be the only way to do it. But this approach requires that we collect, interpret and use data to guide each learning experience on a regular basis—and that can be daunting.

Compliance-Based ‘Data Collection’ Is a Turn-Off

Collecting data, making meaning of it, and using it to take action and inform instruction is already a difficult task in a whole-group setting. But when you’re personalizing instruction for every learner in your class, it can drown you.

In my first teaching job back in 2006, at Campbell High School in Smyrna, Georgia, teachers were expected to collect data at the beginning and end of each semester for every student on their rosters. That data came only from a formal diagnostic, which was similar to the final exam, and from the final exam or state test in classes that required one, like American Literature. We would enter the raw scores into a spreadsheet and send it to the department head at the end of the semester. No meetings, no discussions, no impact on instruction. Many of us were confused—this seemed more about compliance than helping students.

Flash forward five years. I’m at a new school, but the problem is the same. At Westlake High School in Atlanta, the administration started stepping up data collection requirements, asking for formal assessment data during our mid- and end-of-year conferences—but things still felt forced. This was one of the last years that Georgia seniors had to pass a writing test, and I was teaching a study skills class focused on writing for at-risk juniors and seniors who needed to pass it. This experience pushed me to look at data in a different way.

Every student in my class had taken and failed the graduation writing test. I had access to those reports, which showed a raw score as well as individual scores by domain: ideas, organization, style and conventions. I created a simple Excel spreadsheet with those domains and each student’s initial scores. Then I developed assignments aligned to each domain so we could monitor progress more specifically. I also started categorizing daily work by domain, making it easy to compare against that initial score and adjust the curriculum for each student as necessary.

Here is what my data looked like in the early days.
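For anyone who wants to see the mechanics, here is a minimal sketch of the same idea in Python rather than Excel. The four domains are the real ones from the writing test; every student name, score and assignment below is invented purely for illustration.

```python
# Illustrative sketch only: hypothetical students and scores showing how
# daily work can be logged by writing domain (ideas, organization, style,
# conventions) and compared against each student's baseline from the
# failed graduation writing test.

DOMAINS = ["ideas", "organization", "style", "conventions"]

# Baseline domain scores from the original writing test (made-up numbers).
baseline = {
    "Michael": {"ideas": 78, "organization": 52, "style": 60, "conventions": 55},
    "Dana":    {"ideas": 58, "organization": 70, "style": 62, "conventions": 65},
}

# Daily assignments categorized by domain (made-up numbers).
assignments = [
    {"student": "Michael", "domain": "organization", "score": 61},
    {"student": "Michael", "domain": "organization", "score": 68},
    {"student": "Dana",    "domain": "ideas",        "score": 66},
]

def progress_report(student):
    """Average each domain's assignment scores and compare to the baseline."""
    report = {}
    for domain in DOMAINS:
        scores = [a["score"] for a in assignments
                  if a["student"] == student and a["domain"] == domain]
        if scores:
            current = sum(scores) / len(scores)
            report[domain] = current - baseline[student][domain]
    return report

print(progress_report("Michael"))  # {'organization': 12.5} -> 12.5 points above baseline
```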

This is when I first started noticing trends from the data. Some students had great ideas but no organization; others were somewhat organized but struggled to generate ideas. I began meeting with each student individually, focusing on remediation that targeted their areas for growth based on the data for each domain. During whole-group instruction, we worked on essay organization, a domain where students scored low across the board. Since it was a small class with fewer than 15 students, I was able to split the time between one-on-one and whole-group work. For the first time in my academic career, I was using data to inform instruction and not just collecting it for show-and-tell—and it was making a difference.

The original writing test they failed became our baseline, and each assignment we worked on provided data points that I could use as benchmarks for progress throughout the year. Analyzing each assignment by domain was definitely more paperwork for me, but ultimately more productive for all of us.

During each conference, I would review scores with each student. But they weren’t just looking at a raw score anymore; they were looking at scores across four different domains. Even if the domains averaged to a failing grade, that didn’t seem as bad if one domain scored an 85 percent. Suddenly there were silver linings. And though many of my students were programmed to accept failure, their successes were what I focused on. I wasn’t building up false hope, but rather shining a light on the strengths they demonstrated.

It wasn’t until I got to McClarin Success Academy in College Park, Georgia, that I really hit my stride in getting the maximum impact out of student data. My data was more precise, and there was a clearer remediation plan in place, thanks to my own growth in mastery-based grading and curriculum development.

By this time I was knee-deep in standards-based grading, and data collection was ongoing. I was using academic data from formal and informal assessments, along with data from the state standardized test, to inform my practices, which in turn were tailored for each student. To do this, I organized all results by domain and standard so I could identify exactly where students needed support.

Informally, I paid attention to test-taking behaviors. Was the student actively engaged or tuned out? Did the student leave the room? Rather than confronting students, we would have discussions: “Why are you so out of tune with this test?” “What is it about this test that is making you shut down?” Getting a useful answer sometimes proved difficult, but with each conversation, I was able to tweak my assignments and assessments to best suit my students’ academic needs and yield the most accurate data.

After a few semesters, clear trends emerged. Most students struggled with the indirect meaning standard, which includes satire and figurative language. An overwhelming 54 out of 55 students failed that standard on the diagnostic. Given that information, I developed an entire unit on the topic. The one student who did pass that standard skipped the indirect meaning coursework entirely and focused on other learning needs.
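Mechanically, spotting a trend like that is just a tally of diagnostic results per standard. Here is a rough sketch of that tally; the standard names, mastery cutoff and scores below are hypothetical stand-ins, not my actual class data.

```python
# Illustrative sketch only: hypothetical diagnostic results keyed by standard.
# A per-standard tally like this is how a pattern such as "54 of 55 students
# failed the indirect-meaning standard" becomes visible at a glance.

PASSING = 70  # assumed mastery cutoff for this sketch

diagnostic = {
    "indirect_meaning": {"Aaliyah": 45, "Marcus": 91, "Tina": 38},
    "theme":            {"Aaliyah": 72, "Marcus": 80, "Tina": 66},
}

for standard, scores in diagnostic.items():
    failed = sum(1 for s in scores.values() if s < PASSING)
    print(f"{standard}: {failed} of {len(scores)} students below mastery")
```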

Refining my data collection strategy was hard work, but in the long run it saved us time; rather than assigning the whole curriculum to the class, students were only assigned curriculum for standards they hadn’t already mastered.
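Here is a sketch of that filtering step. Again, the module names, cutoff and scores are hypothetical, but the logic is the same: each student only sees the units tied to standards they have not yet mastered.

```python
# Illustrative sketch only: assign each student just the modules for standards
# they have not yet mastered, instead of the whole curriculum.
# Module names, students and scores are hypothetical.

PASSING = 70  # assumed mastery cutoff for this sketch

modules = {
    "indirect_meaning": "Satire and figurative language unit",
    "theme": "Theme analysis unit",
    "argument": "Argumentative writing unit",
}

diagnostic_scores = {
    "Aaliyah": {"indirect_meaning": 45, "theme": 72, "argument": 60},
    "Marcus":  {"indirect_meaning": 91, "theme": 80, "argument": 55},
}

def assigned_modules(student):
    """Return only the modules for standards below the mastery cutoff."""
    return [modules[std] for std, score in diagnostic_scores[student].items()
            if score < PASSING]

for student in diagnostic_scores:
    print(student, "->", assigned_modules(student))
```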

Diamonds in the Rough

Every once in a while I would have students who tested well above the group in standards that normally trended at a failing score. One student, for example, scored above 90 percent on the indirect meaning standard when the average was well below 65 percent. Another student tested at a college level in reading, but had failed almost all of her language arts classes before she got to McClarin. Before administering her diagnostic test, I explained that I would use the results to design her entire class, but it wasn’t an overall score I was looking at; it was each individual standard. I told her I had noticed her reading score and knew what she was capable of, but we needed some more proof, and then we could do some really interesting things in class. This really resonated, and she absolutely destroyed that test.

During our conference, we decided she was going to do an independent deep dive of a few different Octavia Butler short stories, and she was going to make a OneNote notebook collecting information and theories, connecting the literature to different feminist themes. Over the course of the semester, she consulted with college professors and other members of the community for their thoughts as to what defines modern feminism.

A semester earlier this student had been skipping class every day; now she was sitting in a back room getting work done, and she was genuinely excited about it.

At the end of the semester she was the first student in the school who had developed a digital portfolio, and she presented it to her peers and the academic staff. This was the type of thing you didn’t see at my school—it was above and beyond expectations.

The Hard Truth About Data Collection

There isn’t always a lot of institutional support with data collection, and that is a shame because it is really hard for teachers to do this work alone. Getting additional planning time allotted for data entry and interpretation, setting up peer groups to collaborate on data strategy, and having access to data experts can really make a difference. But that kind of support is rare.

On top of that, many gradebook programs have dashboards that aren’t optimized for mastery-based assessment. It’s possible to work around this issue, but it takes creativity, technical know-how, and of course—time.

Working with at-risk students comes with a unique set of challenges, one of the greatest being the massive learning gaps they bring through the door. Strong data practices can help teachers close these gaps—but that work is grueling, ongoing, and not immediately rewarding. It’s a long game, but a mastery-based curriculum driven by consistent data aggregation and interpretation might be the best way to close those gaps in the shortest amount of time. And for an 18-year-old student who is reading and writing at a third-grade level, time is critical.

Farhat Ahmad is a 9th and 10th grade World, Multi, and American Literature teacher at McClarin Success Academy in College Park, Georgia.

