Morgan Polikoff and Andrew McEachin: Senate’s Harkin-Enzi ESEA Plan Is A Step Sideways
Our guest authors today are Morgan Polikoff and Andrew McEachin. Morgan is Assistant Professor in the Rossier School of Education at the University of Southern California. Andrew is an Institute of Education Sciences postdoctoral fellow at the University of Virginia.
By now, it is painfully clear that Congress will not be revising the Elementary and Secondary Education Act (ESEA) before the November elections. And with the new ESEA waivers, who knows when the revision will happen? Congress, however, seems to have some ideas about what next-generation accountability should look like, so we thought it might be useful to examine one leading proposal and see what the likely results would be.
The proposal we refer to is the Harkin-Enzi plan, available here for review. Briefly, the plan identifies 15 percent of schools as targets of intervention, classified into three groups. First are the persistently low-achieving schools (PLAS); these are the 5 percent of schools that are the lowest performers, based on achievement level or a combination of level and growth. Next are the achievement gap schools (AGS); these are the 5 percent of schools with the largest achievement gaps between any two subgroups. Last are the lowest subgroup achievement schools (LSAS); these are the 5 percent of schools with the lowest achievement for any significant subgroup.
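To make the three definitions concrete, here is a minimal sketch (in Python, using pandas) of how a state might operationalize them. The data layout, column names, and the equal level/growth weighting are our own illustrative assumptions, not language from the bill.

```python
import pandas as pd

# Illustrative only: one row per school, with an overall achievement 'level',
# a 'growth' measure, and one column of mean achievement per significant subgroup.
def classify(schools: pd.DataFrame, subgroup_cols: list) -> pd.DataFrame:
    out = schools.copy()

    # PLAS: lowest 5 percent on achievement level, or on a level/growth
    # composite (a 50/50 weighting is assumed here purely for illustration).
    composite = 0.5 * out["level"] + 0.5 * out["growth"]
    out["PLAS"] = composite <= composite.quantile(0.05)

    # AGS: the 5 percent of schools with the largest gap between any two subgroups.
    gap = out[subgroup_cols].max(axis=1) - out[subgroup_cols].min(axis=1)
    out["AGS"] = gap >= gap.quantile(0.95)

    # LSAS: the 5 percent of schools whose lowest-performing subgroup scores lowest.
    lowest_subgroup = out[subgroup_cols].min(axis=1)
    out["LSAS"] = lowest_subgroup <= lowest_subgroup.quantile(0.05)

    return out
```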
The goal of this proposal is both to reduce the number of schools that are identified as low-performing and to create a new operational definition of consistently low-performing schools. To that end, we wanted to know what kinds of schools these groups would target and how stable the classifications would be over time.
To conduct our analysis, we used statewide school-level longitudinal data from California for 2003-04 to 2010-11. As its growth measure, we assumed that California would choose growth in the Academic Performance Index (API), which is a simple school-level difference in aggregate performance (similar to, but perhaps marginally better than, merely taking this year’s proficiency rate and subtracting last year’s proficiency rate). The full analysis is available in our paper, forthcoming in Educational Researcher. Here we highlight three of the more interesting (if not surprising) findings.
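As a rough sketch of that growth measure (the column names are assumed, not California's actual file layout), the computation is just a within-school, year-over-year difference in API:

```python
import pandas as pd

# api: long-format data with one row per school per year.
# Columns 'school', 'year', and 'api' are illustrative assumptions.
def api_growth(api: pd.DataFrame) -> pd.DataFrame:
    api = api.sort_values(["school", "year"]).copy()
    # Growth is simply this year's API minus last year's API for the same school.
    api["growth"] = api.groupby("school")["api"].diff()
    return api
```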
First, we found that the PLAS group was highly stable when using achievement levels (68 percent consistency from year to year in schools falling in the bottom 5 percent), and highly unstable when using some combination of levels and growth (12 percent consistency). The stability improved markedly, however, if two- or three-year rolling averages were used. For instance, a three-year rolling average of levels and growth, even based on the crude growth measure California would likely use, achieved 54 percent year-to-year consistency. We suspect the results would be better still if California used a true student-level growth measure.
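The rolling-average variants amount to smoothing each school's level/growth composite before flagging the bottom 5 percent each year, and then checking how often a flagged school is flagged again the following year. A minimal sketch of that consistency check, again with assumed column names:

```python
import pandas as pd

# df: one row per school per year with columns 'school', 'year', 'composite'
# (all names assumed). Returns the share of flagged school-years that are
# flagged again the following year.
def rolling_consistency(df: pd.DataFrame, window: int = 3) -> float:
    df = df.sort_values(["school", "year"]).copy()

    # Smooth each school's composite with a rolling mean over `window` years.
    df["smoothed"] = (
        df.groupby("school")["composite"]
          .transform(lambda s: s.rolling(window, min_periods=1).mean())
    )

    # Flag the bottom 5 percent of schools within each year.
    df["flagged"] = df.groupby("year")["smoothed"].transform(
        lambda s: s <= s.quantile(0.05)
    ).astype(bool)

    # Year-to-year consistency: of schools flagged in year t (with data in t+1),
    # what share are flagged again in year t+1?
    df["flagged_next"] = df.groupby("school")["flagged"].shift(-1)
    pairs = df[df["flagged"] & df["flagged_next"].notna()]
    return float(pairs["flagged_next"].astype(bool).mean())
```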
Second, we found remarkable consistency in the kinds of schools identified as the bottom 5 percent for LSAS and AGS. For LSAS, a full 74 percent of schools in the bottom 5 percent were there due to the performance of students with disabilities. For AGS, every school identified during the study period had a significant, higher-performing Asian or White subgroup paired with a lower-achieving subgroup.
Third, we found that all the classifications were biased against middle schools and in favor of elementary schools (we excluded high schools from the analysis). We also found some evidence of bias against smaller schools, owing to their greater year-to-year fluctuations in achievement.
Based on our findings, we made some straightforward policy recommendations that could be included in the law with little effort.
But our main takeaway was this: policymakers must start paying attention to the relatively vast literature on accountability policy design. We continue to make the same mistakes over and over again with regard to the types of schools we are identifying for improvement (setting aside the actual improvement strategies). If accountability is to be a successful intervention, it is time to get serious about identifying the schools that really need accountability – persistently low-achieving schools that are not growing over time. For that goal, this proposal falls short.
- Morgan Polikoff and Andrew McEachin
This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.
The views expressed by the blogger are not necessarily those of NEPC.