The 'Mirror Image (What You Read Is Reversed)' Award - 2011 First Runner-Up

Gates Foundation for Learning About Teaching

Money can’t buy you everything. Yes, the $45 million allocated by the Gates Foundation to its “Measures of Effective Teaching” (MET) project is going to fund much more than this first-year evaluation report. And the MET project as a whole seems quite worthwhile. But one would hope that the evaluation of such a major project would be a serious effort. Unfortunately, that is not the case here. Using data from six school districts, the report examined correlations between student survey responses and value-added scores computed both from state tests and from higher-order tests of conceptual understanding. The study found that the measures are related, but only modestly, and the report interprets this as support for the use of value-added as the basis for teacher evaluations. The Gates Foundation touted the report as “some of the strongest evidence to date of the validity of ‘value-added’ analysis,” showing that “teachers’ effectiveness can be reliably estimated by gauging their students’ progress on standardized tests.” This would indeed be impressive were it not for the results suggesting exactly the opposite.

As our review points out, the data indicate that a teacher's value-added for the state test is not strongly related to her effectiveness in a broader sense. Most notably, value-added for state assessments is correlated 0.5 or less with that for the alternative assessments included in the study, meaning that many teachers with low value-added on one test are in fact quite effective when judged by the other. As there is every reason to think that the problems with value-added measures apparent in the MET data would be worse in a high-stakes environment, these initial MET results raise serious questions about the value of student achievement data as a significant component of teacher evaluations – even if that's not what the authors of the Gates report would have readers believe.
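To see why a correlation of 0.5 leaves so much room for disagreement between the two measures, consider a minimal simulation. The sketch below assumes, purely for illustration, that teachers' value-added scores on the two tests are standardized and jointly normal with correlation 0.5 (the upper bound reported above); it then counts how many teachers in the bottom quintile on one test nonetheless score above average on the other. The function name and parameters are hypothetical, not from the MET report.

```python
import random

def bottom_quintile_above_average(r=0.5, n=100_000, seed=1):
    """Illustrative only: draw n (state-test, alternative-test)
    value-added pairs with correlation r, then find the share of
    bottom-quintile teachers (on the state test) who are above
    average on the alternative test."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        x = rng.gauss(0, 1)  # standardized value-added, state test
        # alternative-test score correlated with x at level r
        y = r * x + (1 - r**2) ** 0.5 * rng.gauss(0, 1)
        pairs.append((x, y))
    pairs.sort()                      # order teachers by state-test score
    bottom = pairs[: n // 5]          # bottom quintile on the state test
    above_avg = sum(1 for _, y in bottom if y > 0)
    return above_avg / len(bottom)

share = bottom_quintile_above_average()
print(f"{share:.0%} of bottom-quintile teachers on one test "
      "score above average on the other")
```

Under these assumptions, roughly a fifth of the teachers labeled lowest-performing by one test turn out to be above average on the other, which is the misclassification risk the review highlights.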