Living in Dialogue: Real Crisis in Education: “Reformers” Refuse to Learn
The Education Post is a school reform site that claims to seek a “better conversation, better education.” It supposedly wants to elevate the voices of teachers and others as an antidote to the “politicized debate that pushes people to the extremes.” However, many or most contributors to its blog repeatedly question the motives of people who disagree with them.
I’m a teacher who seems to never learn. I keep trying to engage reformers in a constructive dialogue. I submitted a guest post, and it prompted two rebuttals. I hoped to post a response to those rebuttals, but it was ultimately rejected. This is the reply that the Education Post did not see fit to print.
On a recent episode of NPR’s Planet Money, the hosts told the joke about three econometricians who went deer hunting. The first fires and misses three feet to the right of the deer. The second shoots and misses three feet to the left. The third jumps up and down and cheers, “We got it!”
As NPR explained, economists aim to be within the margin of error, and that is fine when developing statistical models for studying social policy theory and economic trends. But, in a dubious effort to assess so-called performance “results” or student “outputs,” teachers across the nation are being evaluated with value-added estimates that are not reliable or valid for holding individuals accountable.
Educator Jessica Waters, in “Results Matter More than Practice,” replied to my Education Post contribution with the claim that teachers should be evaluated by student “outputs.” She gave no evidence that this is possible or desirable. She seems to assume that it is possible to use test score growth to guess-timate the amount of students’ learning that can be linked to each teacher in every school across this diverse nation.
I wonder if Waters has read either the economists’ regression studies that were used to promote evaluations that supposedly estimate the “results” of teachers’ practice, or the social science which explains why those algorithms aren’t appropriate for evaluating individuals. She made no effort to address the likely scenario that this misuse of test scores will prompt even more destructive teach-to-the-test rote instruction and increase the exodus of teaching talent from schools where it is harder to raise test scores.
In “A Teacher Proposes a Different Framework for Accountability,” I argued “federal interventions should favor ‘win-win’ experiments, not innovations that will inevitably hurt some children in an effort to help others.” I thought that was pretty clear. After all, it is hard to imagine an inner city high school teacher who hasn’t witnessed the harm done to poor children of color by the test, sort, and punish policies of the last 15 years. Even an educator who believes that test-driven accountability produced benefits in her own school must have also witnessed the damage that is widely imposed on so many children in a hasty experiment to help others.
Waters seemed to misunderstand my argument, however. She replied, “I don’t believe there is such a thing as a win-win experiment. I have never seen one as a science teacher or in my many practicum hours as an aspiring principal.”
So, I will further explain. In a sincere effort to improve schools, the federal government has encouraged experiments like value-added teacher evaluations, the Race to the Top (RttT), and School Improvement Grants (SIG), without adequately considering the unintended harm that those risky innovations would inevitably produce. Having helped plan for an RttT application and the SIG, and having taught in a SIG school, I find the unfortunate byproducts of those programs’ disincentives predictable and inevitable.
Interestingly, another Education Post rebuttal to my post prompted a comment, by a supporter of value-added evaluations, that is consistent with many of my positions. Peter Cook criticizes the “negative externalities” attached to the Tennessee RttT. Cook concludes:
Districts are finding it harder to recruit teachers to work in their highest-need placements because teachers fear that their evaluation scores will fall if they take a job in a low-performing school. Ironically, a policy that was intended to ensure that every student has an effective teacher is driving high-quality teachers away from the students who need them most.
Waters cited a single elementary school as anecdotal evidence for the SIG. She ignored SIG’s costly failures, in which up to a third of schools saw a decline in student performance despite the infusion of up to $6 million per school over three years. She likewise ignored the way that the SIG encouraged cheating, the pushing out of students who made it harder to meet test score targets, and the way that teachers who did not conform to teach-to-the-test malpractice were “exited” from schools.
Maybe Waters is correct in claiming that costly gambles with the SIG and “blended learning” were beneficial in the elementary school she cited – or maybe she isn’t. I’m not questioning her integrity. But, I am challenging the integrity of the test scores on which her claims are based. By now, it’s hard to believe that an educator would assume that reports of test score increases are actual evidence of increased learning. Years of intensive study of accountability data have taught me that sometimes the metrics are somewhat accurate and meaningful. More often, they aren’t. It’s dangerous to believe that claims of test score increases are grounded in reality.
I also wonder if Waters read the social science which explained why that rushed experiment headed schools down an unnecessarily dangerous path. The SIG was supposedly inspired by Mass Insight’s The Turnaround Challenge. The SIG’s key features, however, were based on policies that the comprehensive study warned against! That outstanding analysis of school turnarounds warned against the mass dismissal of teachers who weren’t on board with test-driven methods.
Even more important, The Turnaround Challenge added to the large body of social science which explains why curriculum-driven, instruction-driven methods, such as those promoted by accountability-driven reformers, are inherently incapable of turning around the schools facing the greatest challenges. It thus explained why “aligning curricula to higher standards, improving instruction, using data effectively, [and] providing targeted extra help to students … is not enough to meet the challenges that educators – and students – face in high-poverty schools.”
The Turnaround Challenge thus called for comprehensive investments in capacity-building in high-challenge schools and warned against the mass dismissal of teachers. Moreover, reformers ignore subsequent research by Mass Insight and the Ounce of Prevention Fund which explains how the normative SIG approach undermines capacity-building because “current metrics effectively eliminate the viability of early learning as a potential long-term improvement strategy.”
I believe reformers were mistaken, but if they had respected the scientific method when estimating test score growth and mandating the mass dismissal of teachers in high-challenge schools, their efforts might not have degenerated into a lose-lose proposition that damaged the teaching profession, educational values, and, above all, poor students of color. They might have drawn the obvious conclusion that we should invest in high-quality early education and in aligned, coordinated social-emotional supports, and make education a team effort.
Instead, they have circled the wagons, further blamed teachers, and questioned the motives of educators who advocate for science-based alternatives. Waters, for instance, counters, “Thompson is insinuating that if teachers are doing all the practice well, then students’ failure to learn is not the responsibility of the educator.”
I was not insinuating anything, nor was I ducking responsibility. I was urging them to drop their demand for high stakes testing. I was trying to remind reformers of their inability to reliably estimate how much of learning, and of the failure to learn, is attributable to an individual. I was making a case for more humane and holistic policies. I was calling for a team effort – not a coercive, competition-driven effort – to improve schooling.
And, that brings me back to NPR’s Planet Money and its implicit reminder that simple economic theories aren’t necessarily enough to solve complex, real-world problems in schools or elsewhere. It asked: “How many Chicago School economists does it take to change a light bulb?” “None. If the light bulb needed changing, the market would have done it already.”
What do you think? Why are these reformers so committed to punitive testing measures? Why can they not grasp the idea that punishment is not the key to school improvement?
This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.
The views expressed by the blogger are not necessarily those of NEPC.