Shanker Blog: The Wrong Way to Publish Teacher Prep Value-Added Scores
As discussed in a prior post, the research on applying value-added to teacher prep programs is pretty much still in its infancy. Even just a couple more years of research would go a long way toward at least partially addressing the many open questions in this area (including, by the way, the evidence suggesting that differences between programs may not be meaningfully large).
Nevertheless, a few states have decided to plow ahead and begin publishing value-added estimates for their teacher preparation programs. Tennessee, which seems to enjoy being first (its Race to the Top program is, a little ridiculously, called “First to the Top”), was ahead of the pack. The state has once again published ratings for the few dozen teacher preparation programs that operate within its borders. As mentioned in that prior post, if states are going to do this (and, as I said, my personal opinion is that it would be best to wait), it is absolutely essential that the data be presented along with thorough explanations of how to interpret and use them.
Tennessee fails to meet this standard.
For example, one of the big issues is separating selection (who applies and gets accepted to programs) from actual program effects (how well the candidates are trained once they get there). That is, a given program’s graduates may have relatively high value-added scores, but that doesn’t necessarily mean that the program they attended was the reason for the high scores. It may be that certain programs, by virtue of their location (or, perhaps, reputation), simply attract better candidates.
(Similarly, there may be some bias in the estimates stemming from where candidates find jobs.)
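To make the selection problem concrete, here is a minimal simulation sketch (purely illustrative and not drawn from Tennessee's actual model; the programs, sample sizes, and parameter values are all hypothetical). It shows how a program that merely attracts stronger candidates can post higher average value-added scores than an otherwise identical program, even when neither program provides any training advantage.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two programs with IDENTICAL training effects,
# but Program A attracts candidates with higher baseline ability
# (a pure selection difference).
n_grads = 500                    # graduates per program (hypothetical)
training_effect = 0.0            # both programs add the same (zero extra) value

ability_a = rng.normal(0.3, 1.0, n_grads)   # Program A: stronger applicant pool
ability_b = rng.normal(0.0, 1.0, n_grads)   # Program B: average applicant pool

# A graduate's measured value-added = baseline ability + training effect + noise
va_a = ability_a + training_effect + rng.normal(0.0, 0.5, n_grads)
va_b = ability_b + training_effect + rng.normal(0.0, 0.5, n_grads)

print(f"Program A mean value-added: {va_a.mean():+.2f}")
print(f"Program B mean value-added: {va_b.mean():+.2f}")
# Program A looks "better" even though its training added nothing extra:
# the gap reflects who enrolled, not what the program did.
```

The point of the sketch is simply that a raw comparison of graduates' average value-added cannot, by itself, distinguish selection from program quality.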
Now, to be clear, this particular issue might not be relevant to some users of the data (e.g., districts looking to hire teachers). But it might absolutely matter, for example, if policy makers or the public begin, formally or informally, holding programs accountable for their graduates’ value-added results, or if different programs start looking at the ratings to copy best practices. In those contexts, it would be very important for users to understand that the value-added estimates published by Tennessee reflect both program and selection/placement effects, and that they shouldn’t necessarily read too much into the data.
Tennessee’s reports provide a good amount of data and a decent explanation of the technical details (e.g., what statistical significance means). Unless I’m missing something, however, there is not a shred of guidance as to what the estimates mean and how they should be used, including the considerable limitations on what the data can tell us about program effects versus these other potentially confounding factors.
In the section called “Limitations of the Data,” the state provides only two: a note that the ratings are based entirely on program graduates in tested grades and subjects, and a promise that the state will soon expand the rankings to include more outcomes, including classroom observations and other components of the state’s teacher evaluation system (which, by the way, could plausibly suffer from the same limitations as the value-added estimates).
Look, it may seem like I’m making a big deal out of nothing here, but to me, the bottom line is that Tennessee (and other states) have knowingly jumped into these teacher prep value-added waters, presumably because they believe the data provide at least some valuable information about teacher preparation programs. But to the degree that’s true, guidance regarding interpretation and use is that much more important, particularly for certain users: the public, prospective teachers choosing a program, and schools looking to the ratings when deciding which programs’ practices to copy or avoid.
Tennessee’s apparent failure to provide such guidance strikes me as an inexplicable lack of due diligence.