By Joshua Benton
I’d like to apologize in advance for the quality of this column. I just ask that you keep in mind that I have the writing ability of a below-average 15-year-old.
But more about that later. On Saturday, thousands of nervous high school kids will line up their No. 2 pencils and take a brand-new version of the SAT. For the first time, it’ll include a writing section and require kids to write a short essay. The ACT added a new writing section this year, too.
You’d think writing teachers – used to being shunted aside in recent years while reading and math get all the hype – would be thrilled their subject is getting the extra attention.
But some of them worry that the tests could be encouraging bad writing, not good.
“The risk is that the kind of writing that does well on a test becomes the only kind of writing that gets taught,” said Richard Sterling, executive director of the National Writing Project, which trains writing teachers. “It ends up being incredibly reductive, and that does real harm to children.”
Here’s what I mean: The kind of writing that gets you a high score on most standardized tests isn’t necessarily pleasant to read. It’s overstructured, stilted and dry.
People have figured this out. For instance, the Princeton Review has analyzed the new SAT writing section to figure out how best to boost the scores of its teenage customers. Its advice: Be boring.
“The grading of these essays is going to be very superficial,” said Andy Lutz, Princeton Review’s vice president of program development. “It’s easy to coach kids how to write in a formulaic way that will score well.”
If a kid wants to do well on the test, he says, the key is to write in neat handwriting, keep the structure clear, and take no chances. Attempts to be interesting are at least as likely to be punished as rewarded.
In the 1990s, Texas’ TAAS test faced the same criticism. The essay topics on TAAS were set up in a way that just begged for clunky writing: a dull introductory paragraph, three dull middle paragraphs, and a dull final paragraph that sums up the dull dullness of paragraphs one through four.
In time, Texas teachers realized this and started teaching kids how to write dull instead of how to write well. After all, boring, straitjacketed prose was the easiest way to get your passing rate up.
“Teachers started teaching that one format almost exclusively,” said Liz Stevens, who heads a training program for writing teachers in central Texas. “It ended up causing our students to write in a very formulaic way.”
As a result, passing rates on the writing TAAS went up. But the performance of Texas students on national writing tests actually went down – perhaps because kids were only being taught how to write dull prose.
(To the state’s credit, the teachers I’ve talked to all say the new TAKS writing tests are much improved and give kids more room to be interesting.)
So by the very act of testing writing, do we risk making kids worse writers?
That brings us back to my high school mediocrity: A couple of years ago, I got a tour of the headquarters of a major testing corporation. I met lots of smart people, and some of them were excited about a new computer program they were developing.
The pitch: The program could grade a 10th-grader’s essay, and do it just as well as a human could. Finally, writing could be machine-graded, just like multiple-choice geometry problems. Lots of companies are trying to do the same thing.
I was invited to give the system a try and write an essay. I’m not allowed to tell you the topic – I was sworn to secrecy – but I can tell you that I tried to write a bright, engaging essay that didn’t follow the dry structure of so many student papers. By the time I’d finished, I was pretty happy with my work.
Now, at the risk of bragging, I do get paid to write for a living. But when the computer crunched my essay, rating it on a scale of one to six, it coldly informed me I was a … three.
I was a below-average 15-year-old.
I’d written an essay that tried not to be boring, that didn’t follow the formula. And I’d gotten dinged for it.
My bruised ego can take comfort in what Andy Lutz says about the way these standardized-test essays are graded – even when it’s a human reading them, not a computer.
“These tests reward you for being straightforward and long-winded,” he says. “In the real world, it can be good to take rhetorical chances and write something different. But on these tests, following the formula gets rewarded.”
Is that the formula to create good writers?