Living in Dialogue: Is Educational Research Validating Corrupt Practices in New Orleans?
In the aftermath of the Trump election, and as our “post-fact” politics becomes even worse, it will be interesting to see what school reformers do. Will they double down on their truth-challenged spin on charter schools, or will they become more circumspect in using evidence?
The latest report by the Education Research Alliance for New Orleans on school reform, “Extreme Measures: When & How School Closures & Charter Takeovers Benefit Students,” stresses the benefits of charter takeovers of failing schools. This is the third in a series of posts on the way those so-called student performance gains have been exaggerated. The first two posts are here and here.
The technical appendix to the Education Research Alliance report on school closures and takeovers in New Orleans states its “main theory” that “the effects of these interventions are driven by the changes in school quality experienced by students.” The ERA’s other theory, I would add, is that quantitative analyses relying mostly on test scores can assess changes in the quality of actual schools, full of flesh-and-blood educators and students, often without qualitative research into what is actually happening in those buildings.
My big complaint with the ERA’s work is that this assumption is especially dubious in a district with an unflinching focus on raising test scores. As the ERA acknowledges, “Research on the authorization decisions of the RSD suggests that the decisions are based almost entirely on test scores.” I suspect the ERA underestimates the way such a policy can corrupt so many aspects of schooling.
The ERA acknowledges, “Based on theory and prior evidence, we expect the effects to be dynamic, starting with an initial disruption around the time of announcement and followed by null or positive effects as students settle into new schools.” After all, it is implausible that the quality of education and the amount of learning would dramatically increase between the time just before and just after the announcement of a school’s closure.
But, the ERA study of elementary schools reaches two findings that seem irreconcilable. First, as previous research documents, it adds to the evidence that disruption is the enemy of learning, and that less disruptive methods of school improvement are needed. Second, during the disruptive transition, and before the interventions begin, test scores increase dramatically! They actually increase at the same rate as during the two years after the interventions!
The ERA could argue that both the comparison schools and the intervention schools saw improved scores before and after the closure announcement, so that the rate at which intervention schools raised test scores after the takeover was greater relative to the rest of NOLA. But that doesn’t address the fundamental point: it remains unclear whether the virtually identical rates of score increase posted before and after the intervention are equally real, in the sense of being attributable to improved schooling.
During this period, when adults would be tempted to push out students who make it harder to raise test scores, it would seem necessary to document the demographics and outcomes of students at the beginning and end of the year. After all, the ERA notes, “Attrition is one of the main threats to validity in any longitudinal analysis.” So, one would expect the raw numbers of students in each of the schools under review to be reported at the beginning and end of each school year.
When regression analyses indicate that it is the higher-challenge students (whom school reformers say they want to help) who are being placed at great risk, I would hope researchers would go into schools and investigate what is actually happening. So, even though the results in Table 4C are presented in such funky language, I hope the ERA will follow up on the findings, which it says are “consistent with the theory that negative effects arise for less committed students if they experience significant disruption.”
The ERA should seek a balanced appraisal of real-world outcomes such as “The effects start large and negative for 9th graders and then converge to around zero.” Moreover, if it seeks to determine whether these disruptive and potentially destructive interventions produce gains that are real, the ERA should come to grips with the future implications of its finding that “Table 5C shows that the same students experiencing large improvements in school value-added to student test scores experienced no effects on high school graduation or college entry.”
Finally, the panel discussion that followed the ERA presentation noted the pattern in which different providers in various parts of NOLA are likely to propose different types of schools. Even Neerav Kingsland, the former CEO of New Schools for New Orleans, now admits, “A bunch of teaching to the test just jacks up crystallized knowledge but doesn’t really give kids the human capital qualities they need to succeed in the workforce.” Does that mean that NOLA leaders would accept a separate and unequal future in which more privileged students receive respectful, holistic, and engaging instruction, while poorer students receive behavioristic No Excuses pedagogies?
What do you think? Should this evidence in the technical appendix have been included in the body of the report? Will the ERA address these issues in future research?
This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.
Find the original post here:
The views expressed by the blogger are not necessarily those of NEPC.