To Understand the Impact of Teacher-Focused Reform, Pay Attention to Teachers
You don’t need to be a policy analyst to know that huge changes in education are happening at the state and local levels right now – teacher performance pay, the restriction of teachers’ collective bargaining rights, the incorporation of heavily-weighted growth model estimates in teacher evaluations, the elimination of tenure, etc. Like many, I am concerned about the possible consequences of some of these new policies (particularly about their details), as well as about the apparent lack of serious efforts to monitor them.
Our “traditional” gauge of “what works” – cross-sectional test score gains – is totally inadequate, even under ideal circumstances. Even assuming high quality tests that are closely aligned to what has been taught, raw test scores alone cannot account for changes in the student population over time and are subject to measurement error. There is also no way to know whether fluctuations in test scores (even fluctuations that are real) are the result of any particular policy (or lack thereof).
Needless to say, test scores can (and will) play some role, but I for one would like to see more states and districts commissioning reputable, independent researchers to perform thorough, longitudinal analyses of their assessment data (which would at least mitigate the measurement issues). Even so, there is really no way to know how these new, high-stakes test-based policies will influence the validity of testing data, and, as I have argued elsewhere, we should not expect large, immediate testing gains even if policies are working well. If we rely on these data as our only yardstick of how various policies are working, we will be getting a picture that is critically incomplete and potentially biased.
What are the options? Well, we can’t solve all the measurement and causality issues mentioned above, but insofar as the policy changes are focused on teacher quality, it makes sense to evaluate them in part by looking at teacher behavior and characteristics, particularly in those states with new legislation. Here are a few suggestions.
The first is teacher attrition (and retention). When teachers leave the profession voluntarily – for reasons other than retirement – that is an immediate, powerful signal that they may be dissatisfied with the conditions of employment. Obviously, some attrition is normal and even beneficial, especially if teachers who leave tend to be less effective. If, however, there is an unusually large uptick in attrition rates within those few states, such as Florida and Indiana, that have severely restricted teachers’ bargaining rights and imposed new test-based accountability measures, that would be a strong warning sign, worthy of further examination. And, for those who would argue that the teachers leaving are “bad teachers” who “fear accountability” (see here for some serious work on accountability and mobility), attrition data could be examined by teacher “effectiveness” as well.
The same goes for measures of teacher mobility (i.e., changing schools or districts) – if, over the next few years, large swaths of teachers leave these heavily-reformed states for jobs in other states, this is also worth noting. It would be a reasonable (though probably not testable) inference that the policy changes are contributing to this trend. Changes in mobility might also be meaningful within these states and districts. For example, if we find that experienced, highly qualified educators are becoming even more concentrated in the more affluent schools and districts, this would also serve as a warning that the policy changes might be having unintended consequences, such as driving teachers to schools and districts that they perceive – rightly or wrongly – to be less risky for their careers.
The next “Teacher Follow-Up Survey” (the only detailed, national survey of teacher mobility and attrition) will be conducted in 2012-13. We should pay close attention to its results, especially to any changes in the reasons departing teachers give for leaving the profession, and where they end up. In the meantime, it would seem to be a good idea for states and/or large districts to make sure they’re collecting these attrition and mobility data, and to commission their own research (I’m not aware of any such efforts currently in progress, but leave a comment if you are).
Another pretty basic, related measure would be the number of teacher applicants in these states. Although the job market is currently loose, it should still be relatively easy to spot large increases or decreases in the applicant pool. These shifts, if sustained over 2-3 years, could also serve as (imperfect) signals that a new group of candidates is being attracted to – or repelled by – the profession and the conditions of work.
Outside of changes in attrition, mobility and recruitment, another fairly obvious potential source of information about “what’s happening” to the teaching profession would be statewide (or, perhaps, multi-state) surveys of teachers’ opinions and characteristics. Even a relatively small annual survey of a random sample of teachers could monitor how teachers are adjusting and reacting in states and districts where large policy changes are being enacted. Such a survey would also be fairly inexpensive, and well worth it. A few examples are the induction surveys currently offered by the New Teacher Center, the Washington teacher survey reported in this paper, and the newly-released teacher survey in Tennessee. In short, if you want to know what teachers think about these policies and their effects, ask them.
Finally, in regard to “quality,” the value-added literature is a potential source of information, though it should be only one of several. Value-added models can detect – albeit imperfectly – shifts in the distribution of teacher effects over time. I would pay particular attention to the estimates for beginning teachers – i.e., whether new teachers are more “effective” than in the past, and whether they are improving more quickly than their counterparts in previous years (at least in terms of raising test scores) – again, especially in those states that have made major policy changes. And, as mentioned above, the “effectiveness” of teachers who leave the profession is worth monitoring as well.
This is, obviously, not an exhaustive list – it’s just a few ideas (none particularly original). The point is that the recent, huge changes in teacher personnel policies cannot really be evaluated well using only student testing data. In the states that have made these changes – as well as elsewhere – we need to start paying more attention to teachers, particularly to changes in their behavior, attitudes and characteristics. After all, if we’re going to understand how policies focused on “teacher quality” are working out, the best place to look is at teachers themselves.
Of course, whether we respond appropriately to this evidence is a completely different – and more important – question. But one step at a time.
This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.
The views expressed by the blogger are not necessarily those of NEPC.