Bunkum Awards 2009

While the social science in the reports honored this year may be sub-par, the reports often had very high production values: glossy paper, multi-color printing, and artful layouts. Because these reports are kitted out with bibliographies, footnotes, charts and tables, policy makers and laypeople may be forgiven for thinking that they are based on high-quality research. Of the twenty think tank reports reviewed in 2009, five have been deemed worthy of a Bunkum.

The Time Machine Award

Reason Foundation for NEPC Review: Weighted Student Formula Yearbook 2009 (April 2009)

This Reason Foundation report has multiple features that make it an award winner. It engages in definitional acrobatics, pouring a kitchen sink’s worth of assorted reforms into a vessel it calls Weighted Student Formula (WSF) reforms. And, in a truly breathtaking innovation, the report enters its time machine and attributes positive reform outcomes to policy changes that had not yet been implemented. In broad terms, WSF reforms link funding to each student, with that funding calculated as the student’s base allocation plus any additional funds for special needs, economic deprivation or other reasons. The Reason report somehow manages to squeeze three additional reforms into this WSF concept: (a) site-based management; (b) site-based budgeting; and (c) school choice. The expert third-party reviewer said this about the Reason “umbrella labeled as WSF”: it “deceptively suggests that all related policies are necessarily good—even going so far as to credit those policies for improvements that took place before the policies were implemented.” The report, the reviewer added, “then irresponsibly recommends untested, cherry picked policy elements, some of which may substantially undermine equity for children in the highest-need schools within major urban districts.” For example, the plan suggests that extra funds for economically deprived students be eliminated but that added money be given to gifted and talented students. The report also ignores a large body of relevant literature on within-district equity and school site management in its uncritical effort to find support for the foundation’s ideological policy preferences.

The Misdirection Award: Keep our Eyes off What Works

Hoover Institution for NEPC Review: Reroute the Preschool Juggernaut (June 2009)

This electronically published book, published by the Hoover Institution and authored by Fordham Foundation president and Hoover Senior Fellow Checker Finn, joins the ranks of our dubious honorees. Misdirecting readers from a mountain of empirical, peer-reviewed and widely accepted evidence, Finn cherry-picks a few weak studies to criticize proposals for universal preschool. Our third-party expert reviewer summed up Finn’s work as “errors, exaggerations, misrepresentation and logical inconsistency.” Among the reviewer’s catalog of fourteen major errors, he notes that actual costs are exaggerated by a factor of two, while well-documented immediate and long-term effects are under-reported or reported inaccurately. The book also ignores numerous meta-analyses of preschool research and cost-benefit studies that have found a clear social and financial benefit for early education, amounting to hundreds of thousands of dollars per child.

Finn also praises the Florida program as a model even though there are no accountability data available from that program, and the program is structurally inequitable. In the push for a non-universal program, “The book proposes a vague, targeted alternative that is entirely fictional, but which, like the mythical gryphon, is especially powerful and majestic.”

The Data Dodger Award

New York City Charter Schools Evaluation Project for NEPC Review: How New York City’s Charter Schools Affect Achievement (September 2009)

This report, published by the National Bureau of Economic Research and lead-authored by Hoover Institution Senior Fellow Caroline Hoxby, initially escaped our attention. But the Washington Post and many charter advocates trumpeted the findings so loudly that we really had no choice but to seek a review. We were glad we did. In a revolutionary reversal of research procedures, Hoxby and her colleagues announced their results to the media and policymakers while withholding much of the actual information on which those results were based. Theirs was a breakthrough in scientific methodology and a tribute to the non-accountability of pro-accountability researchers.

As for the report itself, Hoxby claimed that charter schools in New York City worked better than public schools, pointing out that she had taken advantage of a natural experimental design because students were assigned by lottery. However, the expert third-party review notes several likely sources of bias that probably inflated Hoxby’s findings. Among these, the study relies on statistical models that include 3rd grade test scores, measured after the admission lotteries had taken place. Because of that timing, those scores could be affected by whether students attend a charter school, meaning that Hoxby’s chosen statistical models destroy the benefits of the very randomization that she and her supporters rely on as the main strength of the study’s design. Of course, the reviewer couldn’t quantify the extent of the overestimate, since Hoxby had left out the information that readers would need to engage in such an independent review and analysis. “Trust, and don’t verify” appears to be the operative accountability philosophy.

Nevertheless, accompanied by a formidable cloud of statistical formulas and dressed up in elaborate theoretical assumptions and explanations, this work was widely heralded as proving the policy wisdom of charter school reform. As our reviewer pointed out, New York City’s charter schools might genuinely be improving student outcomes; however, this study—because of the information it withheld and its methodological shortcomings—does not and cannot resolve the issue.

The Innovations in Promoting Alternative Teacher Certification Award

Mathematica Policy Research for NEPC Review: An Evaluation of Teachers Trained Through Different Routes to Certification: Final Report (February 2009)

Given the great interest in alternative teacher and administrator preparation programs, studies such as this prize winner tend to attract considerable attention. And readers had reason to expect high quality from a federally funded study conducted by Mathematica Policy Research, a respected research organization.

The researchers, using a random assignment design, reported “no evidence” that traditionally trained teachers produced better student test scores than non-traditionally or alternatively trained teachers. There were no caveats in the announcement of the study’s conclusions. But our third-party expert reviewers explained that there should, in fact, have been many, many caveats. Here’s just a taste: a small sample drawn overwhelmingly from urban, poor, heavily minority, early-grade classrooms; troubling sampling methods; and a failure to distinguish the “treatments” that alternatively certified and traditionally certified teachers provided (meaning that members of the two compared groups were both undertrained and had substantially overlapping preparation experiences).

The reviewers stressed that the study’s primary limitations stem from its intentional sampling of a unique subset of schools: those that routinely hire alternatively certified teachers. Since the study necessarily matched alternatively certified and traditionally certified teachers working at the same hard-to-staff schools, it is quite likely that the traditionally certified teachers who made up the comparison group were substantially less qualified than the average traditionally certified teacher.

The review documents additional problems as well. But what’s interesting, and particularly telling, is that even with the deck stacked strongly against traditional teacher preparation, the study included many analyses that found traditionally trained teachers outperformed their alternative route counterparts. It’s just that the report’s authors chose not to fully report and acknowledge these findings in the report’s conclusions.

The Annual Friedman Foundation Johnny-One-Note Award

Friedman Foundation for Educational Choice for NEPC Review: A Win-Win Solution: The Empirical Evidence on How Vouchers Affect Public Schools (February 2009) and for NEPC Review: The Fiscal Impact of Tax-Credit Scholarships in Montana (February 2009)
Buckeye Institute for NEPC Review: The High Cost of High School Dropouts in Ohio (February 2009)

The annual Friedman Foundation Johnny-One-Note Award for promoting an educational cure-all through the miracle of cloning goes to (drum roll please) the Friedman Foundation.

The Foundation has, over the past three years, cloned the same study on the cost of dropouts in at least seven states, a tax-credit voucher report in at least six states, and opinion polls on school choice in fifteen states. Amazingly, all these reports lead to the same conclusion: vouchers and other forms of school choice will save money and improve student outcomes. (Given the miraculous power Friedman assigns to vouchers, one might be forgiven for wondering whether implementing voucher programs would also take off unwanted weight and leave partners fully satisfied.) The basic technique used by Friedman researchers is to take the same report, change the name of the state, plug in some state-specific data, vary the title a bit, and arrive at the predetermined conclusion.

We also should not fail to acknowledge a non-clone Friedman Foundation offering that we reviewed in 2009. The Foundation’s Win-Win report argues that vouchers help both the private and the public schools. It purports to gather all available evidence on the competitive effects of vouchers and is able to find only seventeen studies, most of which were produced by voucher advocacy organizations. From this thin and biased review, the report concludes that there is a consensus on the matter. In truth, existing research provides little reliable information about the competitive effects of vouchers, and this report does little to help answer the question.