DOING YOUR JOB, AND JUST AS WELL: Here's perhaps more fuel for that "machines are taking over" bonfire: automated essay scoring engines grade just as well as humans. This, according to results from a Hewlett Foundation-sponsored study that looked at how nine such machines analyzed 22,029 essays across eight prompts. (The foundation is also sponsoring an automated essay grading competition, to be concluded at the end of this month.)
Before you judge or pick a side, be sure to check out this incisive piece from Justin Reich that sheds light on how automated essay scoring engines actually assess texts. Key point: "it's not trying to 'read' the essay. It's trying to compare the essay to other essays which have been scored by humans" and predict how a person would have scored the essay. (Teaser: "due to technical limitation...the AES programs didn't know where kids put paragraph breaks in their essays.") And in case you had to ask, Audrey Watters has definitely chosen a side. Tom Vander Ark appears to be on the other.
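To make Reich's point concrete, here is a toy sketch of that logic, not any vendor's actual engine: the features chosen (word count, vocabulary variety, sentence length) and the nearest-neighbor averaging are illustrative assumptions. The program never "reads" for meaning; it extracts surface statistics from essays humans have already scored and predicts the score a human would likely give a new essay that looks statistically similar.

```python
import math

def features(essay):
    """Surface features only -- note that nothing here looks at content,
    logic, or paragraph structure (the study's engines couldn't even see
    paragraph breaks)."""
    words = essay.split()
    sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    return (
        len(words),                                      # essay length
        len(set(w.lower() for w in words)) / max(len(words), 1),  # vocab variety
        len(words) / max(len(sentences), 1),             # avg. sentence length
    )

def predict_score(essay, scored_corpus, k=3):
    """Predict a human score by averaging the scores of the k
    human-graded essays whose surface features are most similar."""
    target = features(essay)
    nearest = sorted(
        scored_corpus,
        key=lambda pair: math.dist(features(pair[0]), target),
    )[:k]
    return sum(score for _, score in nearest) / len(nearest)
```

A scorer like this can track human grades reasonably well in aggregate while being trivially fooled by long, wordy nonsense, which is exactly the absurdity critics point to.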
These machines attempt to replicate the grade a human might give, but they use a logic that would make no sense to any teacher. Apparently little weight is given to context or content. (A pretty absurd example here.) Obviously, the emphasis is on grading large numbers of essays. But none of these machines will teach you to write a good kicker.