Cheating, Honestly
Whatever one thinks of the heavy reliance on standardized tests in U.S. public education, one of the things on which there is wide agreement is that cheating must be prevented, and investigated when there’s evidence it might have occurred.
For anyone familiar with test-based accountability, recent cheating scandals in Atlanta, Washington, D.C., Philadelphia and elsewhere are unlikely to have been surprising. There has always been cheating, and it can take many forms, ranging from explicit answer-changing to subtle coaching on test day. One cannot say with any certainty how widespread cheating is, but there is every reason to believe that high-stakes testing increases the likelihood that it will happen. The first step toward addressing that problem is to recognize it.
A district, state or nation that is unable or unwilling to acknowledge the possibility of cheating, do everything possible to prevent it, and face up to it when evidence suggests it has occurred, is ill-equipped to rely on test-based accountability policies.
There are disturbing indications of highly uneven quality in test security. A recent Atlanta Journal-Constitution review of state policies found that, in many places, the security of tests is at least moderately, and perhaps severely, compromised. For instance, roughly half of the 46 states that responded to the AJC’s survey reported that they didn’t analyze answer sheets for evidence of improprieties. In virtually all of them, teachers are asked to proctor their own students’ exams. A 2009 report from the Government Accountability Office reached similar conclusions.
One might argue that high-stakes testing carries only weak incentives for prevention, and particularly for investigation. Cheating can only improve results, and that’s not an outcome anyone is eager to question, especially given that test scores can make or break jobs, institutions and reputations. On the flip side of that equation, states and districts know that admitting significant cheating occurred on their watch will inevitably set off a political firestorm of the first order.
Prevention and investigation are also expensive. Several states and districts, including California and New York City, have cut back on test-security monitoring due to budget cuts.
Another big, related problem with cheating investigations is that cheating is extraordinarily difficult to prove. Erasure analyses – in which answer sheets are checked for erasure marks, and unusually high rates of wrong-to-right corrections are flagged – cannot by themselves serve as conclusive evidence that cheating occurred. That usually requires a case-by-case investigation, including confessions from the people involved. In other words, it means a thorough, difficult inquiry, such as the one in Atlanta.
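To make that limitation concrete, here is a minimal, purely illustrative sketch of this kind of screening. The function name, sample data and two-standard-deviation cutoff are assumptions for the example, not any state’s or vendor’s actual method; the point is that the output is a statistical anomaly, not proof.

```python
import statistics

def flag_suspicious_sheets(wtr_counts, cutoff_sd=2.0):
    """Return indices of answer sheets whose wrong-to-right (WTR) erasure
    count exceeds the group mean by more than `cutoff_sd` standard deviations.
    Illustrative only: a flag is a statistical outlier, not proof of cheating."""
    mean = statistics.mean(wtr_counts)
    sd = statistics.pstdev(wtr_counts)
    if sd == 0:
        return []  # no variation across sheets, nothing to flag
    return [i for i, count in enumerate(wtr_counts)
            if (count - mean) / sd > cutoff_sd]

# Hypothetical example: 30 answer sheets, one with an unusually high WTR count.
counts = [1] * 20 + [0] * 5 + [2] * 4 + [15]
print(flag_suspicious_sheets(counts))  # -> [29], the sheet with 15 erasures
```

Even a sheet flagged this way might reflect nothing more than a student second-guessing answers, which is why such flags are a starting point for investigation rather than a verdict.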
In contrast, there’s D.C. Public Schools (DCPS), where a 2011 USA Today analysis of answer sheets from the late 2000s found implausibly high wrong-to-right erasure rates in a few dozen schools (this was actually the second analysis reaching this conclusion; the first was in 2008, and was commissioned by DCPS’ parent agency).
The district’s response was to commission a couple of rather anemic investigations. Most recently, a probe conducted by the D.C. Office of the Inspector General (OIG) led the district to conclude that there was no “widespread cheating,” even though the OIG’s on-the-ground investigation was basically limited to a single school, in which it did actually find evidence of cheating.
Not acknowledging such “widespread cheating” seems to be DCPS’ priority, in part because doing so would leave their entire reform agenda open to serious fire.*
One obvious problem is that this is not a binary outcome: There’s a lot of space between no cheating and “widespread cheating.”
But, in this case, whether cheating was rare, moderate or “widespread” is in many respects less significant than the unmistakable impression that DCPS is unwilling to own up to the realities of the policies it has embraced.
Remember – during the years in question (and still today), DCPS’ test results were national news. The pressure to boost scores was enormous. Jobs were on the line. Even strong supporters of the DC reforms must acknowledge that these conditions increase the likelihood of wrongdoing. Among officials implementing these policies, such acknowledgment is a responsibility.
In education, we hear a lot about “bold leadership.” The USA Today story was an opportunity for such leadership. District officials should have looked the public in the eye and stated clearly that heavy reliance on high-stakes testing has unintended consequences, and that, while they don’t believe cheating was rampant, it is clear that it occurred.
This response would have generated a tremendous wave of criticism (some of it unfair), and there’s a decent chance that a full-blown, multi-school investigation would have failed to produce a whole lot of conclusive evidence. But running for the barricades also brought serious criticism, and a more forthright approach would have at least demonstrated that school officials are realistic about the serious risks of the path they’ve chosen. And it might have cleared the way for other school leaders to do the same thing.**
Listen, it’s already very difficult to “trust” the results of state tests. There are any number of perfectly legal, albeit still harmful, ways to manipulate test scores, ranging from “teaching to the test” to concentrating efforts on students close to proficiency cutoffs (on a related note, see this truly remarkable 2009 story about one “strategy” used by DCPS).
Yet, for better or worse, we’re putting more and more faith in these tests, and so the absolute bare minimum is to take every possible precaution against outright cheating, and to investigate when there is even a tentative indication that it might have occurred. That’s one test-based incentive everyone can support.
- Matt Di Carlo
*****
* The 2007-2009 changes in cross-sectional proficiency rates that the district touts so frequently, and which it is defending against cheating allegations, are not even valid policy evidence (for many reasons, including, most basically, the fact that they occurred prior to the major DC reforms).