Developmental Writing Tip: Attack of the Robot Graders!

Michael Winerip’s recent New York Times piece entitled “Facing a Robo-Grader? Just Keep Obfuscating Mellifluously” basically just confirms everything we already knew about computer grading. Nothing to see here. Keep moving.

Still reading? Oh. Fine then. The article is about the new “e-rater” from the Educational Testing Service (ETS), software that can grade 16,000 essays in 20 seconds. Winerip pits the ETS e-rater against MIT writing director Les Perelman, who systematically disembowels the robot grader and its proponents. Two days later, Winerip appeared on NPR’s All Things Considered to discuss his article. As NPR’s write-up of the segment put it:

The automated systems look for a number of things in order to grade, or rate, an essay, Winerip says. Among them are sentence structure, syntax, word usage and subject-verb agreements. “[It’s] a lot of the same things a human editor or reader would look for,” he says.

Enh! Wrong.

In my grading rubric (and the grading rubric of most contemporary composition instructors), sentence-level and grammatical issues are pretty much at the bottom of the list of priorities. Of course, if a paper is riddled with these “local errors” to the point that it affects the paper’s coherence, that’s clearly an issue that needs to be dealt with. Incidentally, these are the students who need a real teacher the most. So, pawning these students off on machines would be one of the most irresponsible educational decisions imaginable. Thus, the e-rater fails the test for evaluating students with real developmental writing needs.

But what about students who are decent writers looking to polish their essays? Well, that’s not going to work either. Papers with only a moderate number of local errors can (and in my opinion, should) be evaluated differently. Personally, I care much more about my students’ ideas and overall structure: whether they craft strong thesis statements, transition smoothly, and support their claims with appropriate, well-documented evidence. Let’s see how the computer grader weighs in on these issues.

“You could say the War of 1812 started in 1925,” Winerip says. “There are all kinds of things you could say that have little or nothing to do with reality that could receive a high score.”

Oh, really. Interesting.
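How could confident nonsense score well? Because a scorer built on countable surface features never has to understand anything. Here’s a toy sketch in Python of how such a grader might “work” — emphatically not ETS’s actual e-rater, just an invented illustration whose features, weights, and word list are all made up for the sake of the point:

```python
import re

# Words a naive scorer might treat as evidence of "advanced vocabulary".
# (Hypothetical list, chosen for illustration only.)
FANCY_WORDS = {"mellifluous", "plethora", "paradigm", "myriad", "juxtapose"}

def robo_score(essay: str) -> float:
    """Score an essay on a 0-6 scale using surface features alone."""
    words = re.findall(r"[a-zA-Z']+", essay.lower())
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    if not words or not sentences:
        return 0.0

    # Length: longer essays look more "developed" to a feature counter.
    length_score = min(len(words) / 300, 1.0)

    # Vocabulary: a longer average word looks more "advanced".
    avg_word_len = sum(len(w) for w in words) / len(words)
    vocab_score = min(avg_word_len / 6, 1.0)

    # Syntax: longer sentences look more "complex".
    avg_sent_len = len(words) / len(sentences)
    complexity_score = min(avg_sent_len / 25, 1.0)

    # Buzzword bonus: reward the fancy words outright.
    fancy_bonus = min(0.1 * sum(w in FANCY_WORDS for w in words), 0.3)

    # Note what never happens here: no check that "the War of 1812
    # started in 1925" is false. Facts are invisible to a feature counter.
    total = length_score + vocab_score + complexity_score + fancy_bonus
    return round(total / 3.3 * 6, 1)

nonsense = ("In the myriad panoply of historical paradigms, the War of 1812, "
            "which mellifluously commenced in 1925, juxtaposes a plethora of "
            "truths against the obfuscated firmament of destiny. ") * 10
print(robo_score(nonsense))  # high score, despite being factually absurd
```

Feed that thing 250 words of polysyllabic gibberish about the War of 1812 “mellifluously commencing in 1925” and it hands back a score near the top of the scale. Nothing in it can notice that the essay is false, because nothing in it reads.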

As NPR’s Melissa Block points out, “they truly don’t understand what they’re reading.” Because of this, a computer will never be able to effectively evaluate a sophisticated and multi-faceted piece of writing. In order to fully appreciate a student’s writing process, one must be capable of understanding what one is reading.

Aside from that, how can a student be expected to improve as a writer and a thinker without ever receiving any constructive feedback beyond how many commas he misused? What that does is teach a very formulaic, cookie-cutter style. A style free from “error.” A style with no room for creativity or personal voice. Think for a minute about the great Americans who have made this country what it is. Inventors, Artists, Writers, Entrepreneurs, Thinkers. How many of them would you say colored within the lines? How many of them would you say believed creativity was an unnecessary nuisance that should be stifled whenever possible?

Why Robot Grading Doesn’t Work

Grading is necessarily a dynamic process. It is interactive. Improving student writing requires that teachers enter into conversation with their students. If we really want to make students better writers, we have to talk to them about how to get better, whether that is through comments or in conferences. We can’t just assign a grade and wash our hands. Anyone who disagrees with this either doesn’t care about improving student writing or is lying to himself.

Personally, I believe the proponents of computer grading predominantly fall into the latter category. Many are administrators who want to streamline the process and make it more efficient and, of course, cheaper. Others are teachers who are burned out and tired of grading, which I totally understand. I really do. Whether anyone can effectively evaluate 100 essays in a week is equally questionable. I would love to turn my grading over to computers and have that most difficult weight of teaching lifted from my shoulders. But I know it would be irresponsible and a disaster for student improvement, so I won’t do it.

I know everyone is up in arms right now about these recent developments in computer grading, but it isn’t even something we should be discussing at this point. In the NPR interview, Block astutely points out that she isn’t “sure [she] can see a value beyond speed.” Exactly. Machine grading has absolutely no value beyond speed. None. Surely, this isn’t what it has come to.

Move on. Nothing to see here.



