Breaking Down Myths Behind Automated Essay Graders

AN 'A' FOR CLARITY: "When we talk about machine learning 'reading' essays, we're already on the losing side of an argument." So laments Elijah Mayfield, founder of LightSIDE Labs, a startup for automated writing assessment, who participated in last year's Hewlett-sponsored automated essay grading competition. He's got some beef with a recent New York Times piece suggesting that automated essay graders "offer professors a break." To him, the suggestion of a silver bullet is a "dangerous and irresponsible" claim that's bound to "turn off a lot of people to an entire field."

His cogent response tackles basic assumptions about how machine learning operates, focusing on the differences between an evidence-based algorithm (based on samples provided by human graders) that categorizes essays into different buckets, and the actual human process of grading. Colorful analogies abound. "All of the same things that apply to [photos of] ducks, houses, and apartments," he writes, "apply to essays that deserve an A, a B, or a C." Our favorite takeaway: "more often than not, what the algorithms learn to do are reproduce the already questionable behavior of humans."
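To make the bucket analogy concrete, here is a minimal sketch of the kind of system being described: a toy classifier trained on human-graded examples that sorts new essays into grade buckets by surface similarity. This is an illustration only, not LightSIDE's actual method; the sample essays and grades below are invented, and real systems use far richer features.

```python
# A toy "essay bucketer": it learns nothing about argument quality.
# It just measures which human-graded examples a new essay most
# resembles -- so it reproduces whatever behavior (and biases) the
# human graders exhibited, exactly as Mayfield warns.
from collections import Counter
import math

def features(essay):
    """Lowercased word counts -- the crude 'evidence' the model sees."""
    return Counter(essay.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def train(samples):
    """samples: list of (essay_text, human_grade). One centroid per grade."""
    centroids = {}
    for text, grade_label in samples:
        centroids.setdefault(grade_label, Counter()).update(features(text))
    return centroids

def grade(centroids, essay):
    """Put the essay in the bucket whose centroid it most resembles."""
    f = features(essay)
    return max(centroids, key=lambda g: cosine(centroids[g], f))

# Hypothetical toy data: grades here are invented for illustration.
samples = [
    ("the evidence clearly supports the thesis because", "A"),
    ("the argument is supported by strong evidence and analysis", "A"),
    ("i think it is good because it is good", "C"),
    ("this essay is about the topic and it is nice", "C"),
]
model = train(samples)
print(grade(model, "strong evidence supports this thesis"))  # prints "A"
```

Note the design point: the model never judges whether the thesis is actually supported; it only matches word patterns to past human labels, which is the gap between classification and genuine grading that Mayfield's response hinges on.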

Mayfield offers a compelling warning against conflating assessment with grading. And he believes there's great--and largely unrealized--potential for machine learning to offer students instant feedback as they write. After all, the edtech industry raves about shortening feedback loops for traditional subjects like math. So why not writing?


© 2011-2016 EdSurge Inc. All rights reserved.