
The Impact of Race to the Top is an Open Question (But at Least It’s Being Asked)

You don’t have to look very far to find very strong opinions about Race to the Top (RTTT), the U.S. Department of Education’s (USED) stimulus-funded state-level grant program (which has recently been joined by a district-level spinoff). There are those who think it is a smashing success, while others assert that it is a dismal failure. The truth, of course, is that these claims, particularly the extreme views on either side, are little more than speculation.*

To win the grants, states were strongly encouraged to make several different types of changes, such as adopting new standards, lifting or raising charter school caps, installing new data systems, and implementing brand-new teacher evaluation systems. This means that any real evaluation of the program's impact will take some years and will have to be multifaceted: implementation and effects will almost certainly vary not only by component, but also between states.

In other words, the success or failure of RTTT is an empirical question, one that is still almost entirely open. But there is a silver lining here: USED is at least asking that question, in the form of a five-year, $19 million evaluation program, administered through the National Center for Education Evaluation and Regional Assistance, designed to assess the impact and implementation of various RTTT-fueled policy changes, as well as those of the controversial School Improvement Grants (SIGs).

The research grants were awarded to three organizations: Mathematica Policy Research, the American Institutes for Research (AIR), and Social Policy Research Associates (SPRA). I'm not particularly familiar with SPRA, but both Mathematica and AIR are highly respected research outfits with a great deal of experience performing large-scale evaluations of policy interventions.

Details are (predictably) limited at this early phase, but, according to the brief description of the grant (follow the link above), the analysis will employ several types of data: surveys, interviews, and student testing outcomes. The evaluation of the RTTT/SIG school turnarounds will include data from over 130 districts in 30 states (using a regression discontinuity design). The study of RTTT implementation will span all 50 states, while the analysis of its impact will apparently rely on state-level NAEP data.
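For readers unfamiliar with the method, here is a minimal sketch of how a regression discontinuity estimate works in general. It is purely illustrative: the variable names, the eligibility cutoff, the bandwidth, and the simulated data below are my own assumptions for the sake of the example, not details of the actual RTTT/SIG evaluation.

# Hypothetical illustration of a regression discontinuity estimate.
# The cutoff, variables, and data are invented for illustration only;
# they do not describe the actual RTTT/SIG study design.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

# "Running variable": e.g., a school's prior achievement index, where
# schools below a cutoff are eligible for turnaround-style grants.
prior_score = rng.uniform(20, 80, n)
cutoff = 50.0
treated = (prior_score < cutoff).astype(float)

# Simulated outcome with a modest jump (treatment effect) at the cutoff.
outcome = 0.5 * prior_score + 3.0 * treated + rng.normal(0, 5, n)

# Local linear regression within a bandwidth around the cutoff,
# allowing different slopes on each side of the cutoff.
bandwidth = 10.0
window = np.abs(prior_score - cutoff) <= bandwidth
centered = prior_score[window] - cutoff
X = sm.add_constant(np.column_stack([
    treated[window],              # jump at the cutoff (the RD estimate)
    centered,                     # slope in the running variable
    treated[window] * centered,   # slope change at the cutoff
]))
model = sm.OLS(outcome[window], X).fit()
print(model.params[1])  # estimated discontinuity at the cutoff

The intuition is that schools just above and just below the eligibility cutoff should be very similar, so any jump in outcomes right at the cutoff can plausibly be attributed to the grant itself rather than to pre-existing differences.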

The project is currently in the data gathering phase, with the first round of results expected sometime in the spring of 2014. No doubt there will be other studies before then (see here for one of the first analyses of the SIG grants in California), as well as in the years after this formal evaluation period has concluded.

But the important point here is that the exceedingly strong opinions about RTTT (and SIG), which range from entirely positive to entirely negative, are just that – opinions.

Our public policy discourse tends toward immediate gratification and instant results, and that tendency is ill-suited to policy changes of this magnitude, which are incredibly complex, take years to play out, and certainly cannot be assessed, even tentatively, using raw proficiency rate changes. They require the kind of in-depth data gathering and analysis that I trust will be carried out during these evaluations.

Like it or not, we'll have to wait a while before getting a sense of how these two programs have worked and how they might be improved. Even then, I wouldn't anticipate especially conclusive results, particularly for RTTT, and it's unlikely that the researchers will be able to draw many conclusions about the unique impact of each individual RTTT component.

Nevertheless, it’s a good thing that we’re asking the questions, and it would be even better if we all stopped pretending as if we already have the answers.

- Matt Di Carlo

*****

* I suppose one might argue that RTTT was successful in compelling many states to make policy changes (which they had to do in order to have a shot at winning the grants). That’s obviously true, but I think it’s fair to say that we can only call RTTT a successful program if the policies themselves end up working out.

