Bunkum Awards 2010

2010 was a banner year for bunk. Alas, the Bunkums can only “honor” the true crème de la crème of bunkiness. This year’s victors include contributions from perennial frontrunners like the Thomas B. Fordham Institute, the Heartland Institute, and the Heritage Foundation, but surprise winners include the South Carolina Policy Council Education Foundation, which until now had toiled unrecognized in the minor leagues. We also were compelled to open the contest to an important new competitor – the U.S. Department of Education – which seems poised to contribute an exceptional volume of bunk for the foreseeable future.

Bunkum Awards

The ‘Plural of Anecdote is Not Data’ Award

Fordham Institute and Public Impact for Review of Charter School Autonomy: A Half-Broken Promise
Reason Foundation for Review of Fix the City Schools: Moving All Schools to Charter-Like Autonomy

Fittingly, we have a double winner in this plural category.

In Fix the City Schools, the Reason Foundation argues for “portfolio” school districts focused on closing low-performing schools and opening new ones under the management of autonomous corporations, and it sings hosannas to the improvements in student achievement in post-Katrina New Orleans. However, even assuming real gains were made, myriad factors unrelated to the portfolio approach could explain some or all of them, including the massive exodus of low-income children from the city and a significant increase in resources. Miraculously, the findings from New Orleans, supplemented by examples from other cities, all evangelically testify to the healing powers of the portfolio school approach.

Not to be outdone, the Fordham Institute, in Charter School Autonomy, contends that charter schools have been deprived of the autonomy necessary for them to deliver on the innovative practices they promised. How could charters truly be innovative if they are still laboring under the yoke of bureaucratic control? It’s an interesting excuse for the not-particularly-impressive results of charter schools. But outside of anecdotes and rhetoric, there is nothing in this report that addresses autonomy in relation to financial performance, resource allocation, academic results, or other key school characteristics and outcomes. The authors simply fail to address their research question – whether and how authorizers’ constraints have had an adverse impact on charter school autonomy and success.

Now, if we were being completely fair, this award would also be shared by the Blueprint research summaries published by the U.S. Department of Education. All six reviews noted the paucity of research evidence, and the department’s documents were replete with shadowed boxes providing war stories and anecdotes of “success.” In some cases, the boxed anecdotes were actually longer than the accompanying text. But in the “share the wealth” spirit of the administration, we decided that others should be allowed to bask in the glory of this Bunkum Award.

The ‘Remedial Arithmetic’ Award

Cato Institute for Review of They Spend WHAT? The Real Cost of Public Schools

The goal of this Cato Institute report was to calculate the comprehensive amount spent by K-12 public school districts. Yet, not content with the legally required definitions and conventions used by trained and experienced finance specialists, researcher Adam Schaeffer calculated the “real” costs of public education using his own, extraordinarily creative formula. He found that schools cost a lot more than he thought they should; in fact, his estimates of “real” costs exceed the government’s figures by anywhere from 3% to 151%.

The Cato breakthrough that we most honor with this award was Schaeffer’s decision to double-count expenditures by adding both capital construction and debt service to his calculation. As our reviewer explained, “most capital construction expenditures are not paid with taxpayer dollars. They are paid with proceeds from bonds. Taxpayer dollars then service the debt. This is key. The cost of a house bought with a loan is not the purchase price plus the cost of the loan. For a school district, what taxpayers are paying for in any year is debt service in that year.” Interestingly, Schaeffer then suggests that it is the National Center for Education Statistics’ method of cost calculation that is designed to mislead the public.
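To see why that amounts to double-counting, here is a minimal arithmetic sketch in Python. The figures are invented for illustration (they come from neither the Cato report nor NCES), but they show how adding bond-funded construction on top of the debt service that pays for it charges taxpayers twice for the same buildings.

```python
# Hypothetical single-year figures for an illustrative district (invented numbers,
# not data from the Cato report or from NCES).
operating = 100_000_000      # current operating expenditures
construction = 20_000_000    # capital projects paid for this year with bond proceeds
debt_service = 5_000_000     # principal and interest taxpayers actually pay this year

# The move the reviewer criticizes: counting the bond-funded construction AND the
# payments on those same bonds, inflating the single-year total.
double_counted = operating + construction + debt_service    # 125,000,000

# What taxpayers actually pay this year: operations plus debt service only.
taxpayer_cost = operating + debt_service                    # 105,000,000

print(f"Double-counted total: ${double_counted:,}")
print(f"Actual taxpayer cost this year: ${taxpayer_cost:,}")
```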

The ‘Good Enough for Government Work’ Award

U.S. Department of Education for Review of A Complete Education, Review of College- and Career-Ready Students, Review of Fostering Innovation and Excellence, Review of Great Teachers and Great Leaders, Review of Meeting the Needs of English Learners and Other Diverse Learners, and Review of Successful, Safe, and Healthy Students

This year’s Grand Prize goes to Secretary Arne Duncan and his U.S. Department of Education staff for the exceptionally disappointing quality of the research summaries supporting their plans for the reauthorization of ESEA (aka, the Blueprint). Our esteemed panel of judges solemnly considered whether the federal government was even eligible for such an award; with so many resources at its disposal, the government seems to have an unfair advantage. But the Blueprint research summaries stood out in two ways that we felt needed recognition. First, they almost religiously avoided acknowledging or using the large body of high-quality research that the federal government itself had commissioned and published over the years. Second, they raised our expectations with repeated assurances that recommended policies would be solidly grounded in research – only to dash those hopes in research summary after research summary.

The issues addressed in the Blueprint and the research summaries are certainly vital to the nation’s education system – standards, teacher quality, comprehensive education, special needs, safe and healthy students, and charter schools. But across the board, our reviewers found the work to be of inadequate quality. One reviewer was astounded that the administration did not mount a comprehensive defense of its central education policies. The research summary reviewed by another was described as a “political text that starts with a conclusion and then finds evidence to support it.” Then there was the question of critical omissions – such as the complete absence of a rationale for the chosen accountability system and intervention models. As for sources, the research summaries were often found to be summarizing non-research: for example, only 10% of the roughly 80 citations in the “Great Teachers and Great Leaders” summary referred to peer-reviewed research sources.

The ‘Magic Potion’ Award

South Carolina Policy Council Education Foundation for Review of How School Choice Can Create Jobs for South Carolina

This South Carolina Policy Council Education Foundation report, authored by Sven Larson, wins this year’s prize for the most fatuous cause-and-effect claim: that school choice will miraculously (our word, not theirs) relieve the unemployment problems of five poor, rural South Carolina counties. Reading this report, one feels transported to an old-time traveling medicine show peddling magic potions: That’s right, ladies and gents, just by being exposed to school choice programs, students will be given a dose of the elixir of private entrepreneurialism that will result in 329 student jobs in 123 new businesses!

As our reviewer explains, the claims in the report are based overwhelmingly on the “tuitioning” programs established in Vermont and Maine back in the 19th century, whereby very small towns that do not operate schools pay tuition to schools in other towns. Milwaukee’s choice adventures are also cited, as children there are said to be more entrepreneurial. But none of the source documents was peer-reviewed, and the data are merely cross-sectional, so no causal inferences are (sensibly) possible. Moreover, while ascribing small-town New England characteristics to South Carolina counties may seem fanciful, it is not too far a reach for Mr. Larson. Unencumbered by traditional citations, the author announces that the benefits of vouchers are “widely documented.”

The ‘If I Say It Enough, Will It Still Be Untrue?’ Award

Heritage Foundation for Review of Closing the Racial Achievement Gap

While the Heritage Foundation receives this award for publishing Closing the Racial Achievement Gap, by Matthew Ladner and Lindsey Burke, it should really share this award with the think tanks in Colorado, New Mexico, Arizona, Wisconsin, Oklahoma, Utah, and Indiana, as well as the Hoover Institution and the Pacific Research Institute, all of which also published essentially the same report. Perhaps even the derivative articles by Dr. Ladner at National Review Online and Foxnews.com deserve some recognition.

But Ladner’s fecundity isn’t really what sets this work apart. It’s his willingness to smash through walls of basic research standards in dogged pursuit of his policy agenda. In this case, the agenda is a passel of Florida reforms: vouchers funded by tax credits, charter schools, online education, performance-based teacher pay, test-score grading of schools and districts, test-based grade retention, and alternative teacher certification. He likes them. A lot. And he contends that other states should adopt the same package because, in his vision, these reforms have clearly caused an improvement in Florida schools.

The problem, alas, is with the “caused” part. Nothing in the data or analyses of Dr. Ladner or the Heritage Foundation comes even close to allowing a causal inference. Instead, the report offers uncontrolled descriptive data focused on NAEP fourth-grade reading scores. Those scores increased, so Ladner’s favored policies must be working. Our reviewer threw a few pails of cold water on that conclusion, noting first that other policies were also in effect during the time period studied: Florida has an excellent supportive reading program aimed at the early grades; its accountability system strongly targets resources at lower-performing schools; and it has one of the nation’s most aggressive class-size reduction reforms. Ladner steadfastly dismisses the possibility that these factors may have played a major role in any achievement improvements.

The reviewer also pointed out why Ladner focused on fourth-grade reading, and why NAEP growth in other subjects and grades wasn’t nearly as impressive: the analysis fails to account for the state’s policy of retaining third-graders with low reading scores. That policy gave the lowest-scoring students another year to grow before they took the fourth-grade reading test – voilà. By analogy, consider growth in height instead of growth in test scores. If two states set out to measure the average height of their fourth-graders, but one state (Florida) first identified the shortest 20% of third-graders and held them back to grow an additional year before measurement, the comparison would be biased in that state’s favor.
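A rough simulation of the reviewer’s height analogy makes the bias concrete. The heights and growth rate below are made up for illustration (no actual Florida or NAEP data are involved): simply holding back the shortest fifth of the cohort for an extra year of growth raises the measured average, even though no one grows any faster.

```python
import random

random.seed(0)

def mean(xs):
    return sum(xs) / len(xs)

# Invented numbers: heights (inches) of a cohort entering fourth grade,
# assuming roughly 2 inches of growth per additional year.
cohort = [random.gauss(52, 3) for _ in range(10_000)]

# State A measures every fourth-grader on schedule.
state_a = mean(cohort)

# State B (the Florida analogy): hold back the shortest 20% for an extra
# year of growth before they are measured.
cutoff = sorted(cohort)[int(0.2 * len(cohort))]
promoted = [h for h in cohort if h >= cutoff]
held_back = [h + 2 for h in cohort if h < cutoff]   # one more year of growth

state_b = mean(promoted + held_back)

print(f"State A average height: {state_a:.1f} in")
print(f"State B average height: {state_b:.1f} in")  # higher by construction
```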

But Dr. Ladner has found his bone, and he’s not letting go. When confronted with the reviewer’s critique, he responded, "The change averse may wish to quibble over the details, or agonize over just what reform did how much of what," but the bottom line is that Florida’s fourth-grade scores increased dramatically. Indeed they did. And no research-bound egghead is going to mess up his good causal story.