Cheating Scandal in Atlanta; or, The Ones Who Got Caught
By now you’ve probably heard about the two-year investigation that resulted in 35 Atlanta educators’ being indicted Friday in a massive cheating scandal. (If not, you can read about it here.)
What these people did was wrong, not just legally but ethically: changing student scores on standardized tests obfuscated student deficiencies that could otherwise have been identified and remediated. In one case, a struggling school lost federal funding for such remediation because the principal fraudulently raised test scores so high that the school appeared to no longer need extra help. I feel no impulse to defend this behavior, so please do not read the rest of my post as an apologia.
HOWEVER. This wretched situation does throw “accountability” questions back under the interrogation lamp; or it should, at least.
I have been an educator for 20 years. I have also held other kinds of jobs. Measuring how well I performed in those other jobs was much more straightforward. For example, I spent two summers stuffing envelopes and sorting mail in a warehouse. Each stuffer (or sorter) was expected to complete a certain number of envelopes per shift. Quality control supervisors checked stacks of finished items at random, just to make sure no one was skipping pieces or anything. If you were too slow or too prone to error, you could be fired. Similarly, for much of high school I worked part time at a large chain hardware store. As a cashier, I was subject to nightly performance evaluations: For accuracy, the amount of money in my drawer was checked against the amount my register had recorded, and, for speed and efficiency, the number of transactions I had rung was noted and compared to the counts at the other registers that had been running at the same time. Because I am awesome, my drawer tended to balance and my customer counts tended to be among the highest; thus, I kept that job until I quit.
As a conscientious employee, I liked knowing my stats. My teenage son calls me a “try hard,” and he may be a smartass but that doesn’t mean he’s wrong. I am a try hard, always have been. So I empathize with education administrators and parents who long for the warm comfort of quantitative performance data. I really, really do.
Think about this, though: As an envelope stuffer or a cashier, I had almost complete control over the results of my performance. Sure, some customers were obnoxiously slow, and might hold up my totals for various reasons. These were the days when people wrote more checks than we do now, so we would get older men who would come in, no lie, with one sweaty check folded in their shirt pockets–it seemed their wives did not trust them with the whole checkbook–and they never had a pen, and they never thought to tell you that until every one of their tiny pieces of PVC and their 30 tubes of caulk had been punched in, so after all that 10-keying of 4 billion item numbers (pre-scanner days as well) you still had to wait for the painstaking check writing process, fretting the whole time because the other registers sounded like freaking Vegas slot machines over there and you knew your Saturday customer count record was shot. (Ah, memories.)

As a teacher, though, I can do whatever I like with my performance, but the results, if by results we mean student performance, are out of my control. I can’t take the tests. I can’t write the essays. I can’t do ANY of the things. How can we measure my performance by someone else’s?
The truth that is so difficult to face is that not all people will perform at the same level in any given thing, no matter how much training they receive. Some people will always be better writers, better readers, better mathematicians, better painters, dancers, carpenters, pole vaulters, teachers, etc. Education can’t be expected to bring everyone to a pre-determined level of prowess, but that is precisely what we have begun to expect, and it’s a recipe for failure.
What, then, can/should we expect? And how to measure it? I don’t know. I will point out, though, that in every school I’ve spent time in, either as a student or an instructor–EVERY school, without exception–everyone knew who the best teachers were and who the worst teachers were. Personal differences account for some anomalous reporting, of course, but overall…everyone knows. How do we know? Maybe we could try to figure out what calculus drives that knowledge and find a way to formalize it.

Moreover, I’ve learned as a runner that quantitative data lose their power to motivate unless you find ways to personalize them. I am not a fast runner; my 5K personal best is 25:02, which is not bad for an old lady but isn’t going to win any races, and I’m pretty sure that’s as good as it’s ever going to get with me, so what else do I have to shoot for? Well, runners have all manner of other ways they challenge themselves and evaluate improvements. Instead of focusing on final time, for example, we sometimes try to run each kilometer of a 5K or 10K faster than the one before. What if students (and their teachers) were evaluated by how well they improved on their own scores? What if they were encouraged to improve on one skill per test, as opposed to all of them?
It’s an awful thing to contemplate, but debacles like the one in Atlanta will continue as long as we make educators fear for their jobs because they can’t work miracles, and students will suffer right along with us.