Teacher Surveys and Standardized Tests: Different Data, Similar Warning Labels
As a strong believer in paying attention to what teachers think about policy, I always review the results of MetLife’s annual teacher survey. The big theme of this year’s survey, as pushed by the press release and reiterated in most of the media coverage, was that job satisfaction among teachers is at “its lowest level in 25 years.”
It turns out that a change in question wording precludes direct comparisons between responses to this year’s job satisfaction question and those from surveys conducted in 2009 or earlier. Even slight changes in wording can affect results, though it seems implausible that this one had a dramatic effect. In any case, it is instructive to look at the reactions to this finding. If I may generalize a bit here, one “camp” argued that the decline in teacher satisfaction is due to recent policy changes, such as eroding job protections, new evaluations, and the upcoming implementation of the Common Core. Another “camp” urged caution, pointing out not only that job satisfaction remains rather high, but also that the decline among teachers can be found among many other groups of workers, likely a result of the ongoing recession.
Although it is more than plausible that recent reforms are taking a toll on teacher morale, and this possibility merits attention, those urging caution, in my view, are correct. It’s simply not appropriate to draw strong conclusions as to what is causing this (or any other) trend in aggregate teacher attitudes, and it’s even more questionable to chalk it up to a reaction against specific policies, particularly during a time of economic hardship.
I would offer two additional, boilerplate caveats about these results.
The first is that this survey, like most surveys, is prone to various forms of error. This means that smaller differences (or changes between years) should be viewed with caution. For instance, the five point decline between 2011 and 2012 in the percentage of teachers saying they are “very satisfied” with their jobs is probably large enough to be taken seriously, but may also be due in part to imprecision from sampling error, weighting, etc.*
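To see why a five-point change can sit near the edge of statistical noise, here is a minimal sketch in Python. The sample size of 1,000 teachers per year is purely an assumption for illustration, as is the use of simple-random-sample formulas and the 44/39 percent figures implied by the five-point decline; the survey’s actual design and weighting would likely widen the interval.

```python
import math

# Hypothetical inputs: "very satisfied" shares consistent with a
# five-point decline, and an ASSUMED simple random sample of 1,000
# teachers per year. Real design effects and weighting would widen
# the interval shown here.
p1, n1 = 0.44, 1000  # 2011
p2, n2 = 0.39, 1000  # 2012

# Standard error of the difference between two independent proportions
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

diff = p1 - p2
margin = 1.96 * se  # 95 percent confidence interval half-width

print(f"Observed decline: {diff:.3f}")                          # 0.050
print(f"95% CI: [{diff - margin:.3f}, {diff + margin:.3f}]")    # [0.007, 0.093]
```

Under these illustrative assumptions, the interval only barely excludes zero, which is consistent with treating the decline as probably real but imprecisely estimated.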
The second, highly related warning is that the data are of course cross-sectional, and thus trends in responses might to some extent reflect changes in the composition of the sample/workforce, rather than actual shifts in teachers’ attitudes. This is particularly salient given that the teacher workforce has been changing rapidly. For example, newer teachers are less likely to express dissatisfaction than their more experienced colleagues. So, if there are more novice teachers now than in the past, this might create the “illusion” of increasing satisfaction (or mitigate “real” declines in satisfaction), particularly when responses are viewed over longer periods of time.
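To make the compositional point concrete, here is a stylized sketch with made-up satisfaction rates. Within-group attitudes are held perfectly constant; only the novice/veteran mix of the sample changes between the two “years,” yet the aggregate figure still moves.

```python
# Hypothetical "very satisfied" rates for each group, held fixed
# across years. These numbers are invented for illustration only.
sat_novice, sat_veteran = 0.50, 0.38

def aggregate(share_novice):
    """Overall satisfaction as a mix-weighted average of the two groups."""
    return share_novice * sat_novice + (1 - share_novice) * sat_veteran

year1 = aggregate(0.20)  # 20 percent novices in the earlier sample
year2 = aggregate(0.35)  # 35 percent novices in the later sample

print(f"Year 1 aggregate: {year1:.3f}")  # 0.404
print(f"Year 2 aggregate: {year2:.3f}")  # 0.422
# Aggregate satisfaction rises about 1.8 points even though no
# individual teacher's attitude changed -- the entire shift is
# driven by the changing composition of the sample.
```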
But my primary point is that these warnings – the need for caution in drawing causal inferences, compositional changes, etc. – apply just as strongly to the interpretation of student test scores. Those who jumped at the chance to use the teacher survey trends to argue against policies they oppose are implicitly endorsing a “method” that, when applied to testing data, is often used to support those same policies. And, similarly, if you’re urging caution about the interpretation of survey results, you should be just as vigilant about the interpretation of testing data.
All of us (myself included, of course) are prone to interpret results in a manner that squares with our pre-existing beliefs, and it’s often very difficult to monitor oneself in this regard. Nevertheless, I am still naïve enough to believe that we can keep improving, and so I am heartened to see people urge caution, even if we all slip up now and again.