Surveying the Teacher Opinion Landscape
I’m a big fan of surveys of teachers’ opinions of education policy, not only because of educators’ valuable policy-relevant knowledge, but also because their views are sometimes misrepresented or disregarded in our public discourse.
For instance, the diverse set of ideas that might be loosely characterized as “market-based reform” faces a bit of tension when it comes to teacher support. Without question, some teachers support the more controversial market-based policy ideas, such as pay and evaluations based substantially on test scores, but most do not. The relatively low levels of teacher endorsement don’t necessarily mean these ideas are “bad,” and much of the disagreement is less about the desirability of general policies (e.g., new teacher evaluations) than about their specifics (e.g., the measures that comprise those evaluations). In any case, it’s a somewhat awkward juxtaposition: a focus on “respecting and elevating the teaching profession” by means of policies that most teachers do not like.
Sometimes (albeit too infrequently) this tension is discussed meaningfully; other times it is obscured – e.g., by attempts to portray teachers’ disagreement as “union opposition.” But, as mentioned above, teachers are not a monolith, and their opinions can and do change (see here). This is, in my view, a situation always worth monitoring, so I thought I’d take a look at a recent report from the organization Teach Plus, which presents data from a survey that they collected themselves.
The primary conclusion of this report is that there is a split between younger and more experienced teachers in terms of their policy views, with the two groups defined in this analysis as teachers with 10 or fewer years versus those with 11+ years on the job (Teach Plus calls the former “The New Majority,” based on a recent paper on trends in teacher characteristics). In general, the report concludes, the younger group is more supportive of policies such as performance-based pay, new evaluations that incorporate “student growth,” and 401(k) or defined contribution pension plans. In contrast, there is more “inter-generational” agreement on other policies, such as collaboration time, class size, and extended time. The authors offer a brief discussion of what these results might mean for teacher retention and other outcomes.
For the record, I do not mean to take pot shots at Teach Plus’ work. They are not a professional polling organization (nor am I a professional pollster), and I applaud their efforts to listen to teachers via surveys. Furthermore, while I might take issue with some of their interpretations, the narrative is not blatantly skewed toward a specific perspective. But I think this report illustrates a few key issues, the most important of which are not at all specific to Teach Plus.
First, with regard to this particular report: this is not a scientific survey, a crucial fact that is not mentioned even once in the body of the report. Teach Plus collected surveys from roughly 1,000 respondents. That is no easy task no matter how it’s done, but the survey was conducted online and distributed via “social media sites and education organizations.” The respondents may therefore be different from the typical U.S. teacher in terms of their views, as may be the relationship between opinions and experience. This is especially salient given that Teach Plus is an advocacy group, and thus their supporters and followers are likely overrepresented in the survey.
Non-random surveys can be useful, but they always require very careful interpretation, and, if they’re to be used to draw conclusions about teachers in general, one must carry out a series of diagnostic checks to determine whether the sample’s measurable characteristics match those of the population (see this well-done recent Education Sector report for an example of a random survey).
Teach Plus’ discussion of their sample is limited to an appendix, in which they almost seem to imply that the sample is more or less valid because its percentage of teachers with 10 or fewer years of experience is similar to the U.S. average (at least as of 2007-08, the most recent available year of the Schools and Staffing Survey, one of the few national surveys of U.S. teachers).
Even taken at face value, this is painfully insufficient. Making things worse, the limited information provided in the Teach Plus report actually suggests the need for serious caution about the sample (see the first footnote, below).*
On a less important note, I may be missing something, but I was unable to find a complete set of question wordings and tabulations. This may sound nitpicky, but it really does make it more difficult to interpret the rather limited, highly aggregated set of results presented in the report.**
If you go to the trouble of collecting survey data, you might as well present all of it, even if it’s done in a supplemental document.
But my two most important points are not criticisms per se, and they are not at all unique to this particular survey. The first is a suggestion about question wording. The Teach Plus report finds that their less experienced respondents (1-10 years) are more receptive than their veteran counterparts to the idea that “student growth should be part of teacher evaluations.” This wording, though common, is not as helpful as it could be. “Student growth” means different things to different teachers.
Some teachers may associate “growth” with standardized test-based measures (e.g., value-added), whereas others may see it differently (e.g., they may interpret it as growth based on other types of assessments). This is important not only because the choice of measures is a very contentious issue, one that states and districts are currently facing, but also because a significant proportion of teachers react favorably to using “growth” or “progress” in policies, yet this support drops to extremely low levels if you ask directly about standardized test scores. And these perceptions may vary by experience or other characteristics. So, I think it may be time for surveys to stop asking about “growth” or “progress,” and instead be more specific (preferably querying views on different types of “progress” measures). This would be more helpful in the actual debate about evaluations, as well as other policies, such as performance pay.**
Second, and most generally, there’s another important (though perhaps obvious) distinction I’d like to point out, one that is sometimes obscured a bit by rhetoric about the “new generation of teachers.” This is the difference between an age/experience “effect” and a cohort “effect” (in this context, “association” is a better word than “effect”). For example, it’s not surprising that, at any given time, younger, less experienced teachers are more receptive to ideas such as receiving additional salary instead of larger pension contributions (here’s a related paper). By itself, that’s best characterized as an age/experience “effect.”
A cohort “effect,” on the other hand, would be if the new generation of teachers holds different views than preceding cohorts. In other words, are today’s less experienced teachers more supportive of changes to policies such as pensions or seniority than were less experienced teachers in previous years?
This matters because, put simply, there may be more aggregate support for some of these policies in 2012 in no small part because teachers have fewer years on the job, on average, than in previous years. Put differently, there’s a meaningful difference between support levels that rise or fall due to demographic changes and “real” shifts in attitudes, especially when those characteristics (e.g., experience) are not fixed. It’s true that average experience has been declining in recent years (and will most likely continue to do so for a while), and that increasing retention is most important during the first few years in the classroom. Nevertheless, we should be careful about drawing conclusions about changing attitudes among the “new generation” of teachers based solely on breakdowns by age or experience, rather than on changes over time within these groups. At the very least, we should acknowledge the difference.
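To illustrate with purely hypothetical numbers (these are not from the report): suppose 60 percent of teachers with 10 or fewer years of experience support a given policy, compared with 40 percent of those with 11+ years, and suppose neither group’s views change at all. If the less experienced group grows from 40 to 55 percent of the workforce, overall support rises from 48 percent to roughly 51 percent – an apparent increase that reflects nothing more than the shifting experience distribution.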
So, overall, I think it’s great when Teach Plus and other organizations go to the trouble of collecting teacher survey data and presenting it for public consumption. Even voluntary surveys can be useful if properly interpreted, and, again, I am more than receptive to the possibility that teachers’ attitudes toward issues like evaluation and compensation are evolving. But, if we’re going to listen to teachers when shaping policy – and we most certainly should do so – let’s make sure we’re doing it correctly.
- Matt Di Carlo
*****
* For instance, 20 percent of Teach Plus’ sample has 1-5 years of experience (5 percent 1-2 years, 15 percent 3-5 years). Nationally, however, in 2007-08, 19 percent of public school teachers had between 1-3 years of experience, which means that the proportion of U.S. teachers with 1-3 years in 2007-08 was roughly equivalent to the Teach Plus proportion with 1-5 years. These underlying distributions matter. Similarly, in the appendix of the report, we learn that 10 percent of the Teach Plus sample is comprised of charter school teachers. Nationally, in 2007-08, the proportion was roughly two percent. Given rapid charter school proliferation, this is almost certainly higher now, but it’s doubtful that it’s anywhere near 10 percent. And, once again, we would really need a bunch of other variables to evaluate the sample.
** For example, for many of the questions, respondents were asked to choose from one of five categories ranging from “very important” to “not at all important” (the actual label for the latter category is not specified, so that’s my guess). But none of the results in the report break down responses by category. They either present the responses as averages on the 1-5 scale (not a great practice for this type of variable), or as “percent who agree/disagree” dichotomies. Neither permits the reader to distinguish between different levels of agreement/disagreement. Similarly, there are no breakdowns of attitudes by experience that don’t rely on the “10 or fewer years/11+ years” dichotomy. Although estimates for smaller subsamples will be less precise, variation in views within these groups is very important. For instance, given that the narrative is primarily focused on identifying implications for teacher retention, estimates for teachers with 1-3 (or 1-5) years on the job would seem to be the most pertinent.