Bunkum Awards 2010

2010 was a banner year for bunk. Alas, the Bunkums can only “honor” the true crème de la crème of bunkiness. This year’s victors include contributions from perennial frontrunners like the Thomas B. Fordham Institute, the Heartland Institute, and the Heritage Foundation, but surprise winners included the South Carolina Policy Council Education Foundation, which until now had toiled unrecognized in the minor leagues. And we also were compelled to open the contest to an important new competitor – the U.S. Department of Education – which seems poised to contribute an exceptional volume of bunk for the foreseeable future.

The ‘Magic Potion’ Award

South Carolina Policy Council Education Foundation for How School Choice Can Create Jobs for South Carolina (December 2009)

This South Carolina Policy Council Education Foundation report, authored by Sven Larson, wins this year’s prize for the most fatuous cause-and-effect claim. The claim is that school choice will miraculously (our word, not theirs) reduce unemployment in five poor, rural South Carolina counties. Reading this report, one feels transported to an old-time traveling medicine show peddling magic potions: That’s right, ladies and gents, just by being exposed to school choice programs, students will be given a dose of the elixir of private entrepreneurialism that will result in 329 student jobs in 123 new businesses!

As our reviewer explains, the claims in the report rest overwhelmingly on the “tuitioning” programs established in Vermont and Maine back in the 19th century, whereby very small towns that do not operate schools pay tuition to schools in other towns. Milwaukee’s choice adventures are also cited, as children there are said to be more entrepreneurial. But none of the source documents were peer-reviewed, and the data are simply cross-sectional; no causal inference can sensibly be drawn from them. Moreover, while ascribing New England small-town characteristics to South Carolina counties may seem fanciful, it is not too far a reach for Mr. Larson. Unencumbered by traditional citations, the author simply announces that the benefits of vouchers are “widely documented.”

The ‘Remedial Arithmetic’ Award

Cato Institute for They Spend WHAT? The Real Cost of Public Schools (March 2010)

The goal of this Cato Institute report was to produce a comprehensive accounting of what K-12 public school districts spend. Yet, not content with the legally required definitions and conventions used by trained and experienced finance specialists, researcher Adam Schaeffer calculated the “real” costs of public education using his own, extraordinarily creative formula. He found that schools cost a lot more than he thought they should. In fact, his estimates of “real” costs run anywhere from 3% to 151% higher than the government’s numbers.

The Cato breakthrough that we most honor with this award was Schaeffer’s decision to double-count expenditures by adding both capital construction and debt service to his calculation. As our reviewer explained, “most capital construction expenditures are not paid with taxpayer dollars. They are paid with proceeds from bonds. Taxpayer dollars then service the debt. This is key. The cost of a house bought with a loan is not the purchase price plus the cost of the loan. For a school district, what taxpayers are paying for in any year is debt service in that year.” Interestingly, Schaeffer then infers that it’s the National Center for Education Statistics’ method of cost calculation that’s designed to mislead the public.
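To see the double-counting concretely, here is a minimal sketch in Python; the $10 million bond and the payment figure are invented for illustration, not drawn from the report:

```python
# Hypothetical district: a $10M school built with bond proceeds,
# repaid by taxpayers over 20 years at roughly $0.8M per year.
capital_outlay_this_year = 10.0  # $M spent from bond proceeds this year
debt_service_this_year = 0.8     # $M of taxpayer money actually paid this year

# What taxpayers actually pay toward the building this year:
taxpayer_cost = debt_service_this_year

# Adding both line items counts the same building twice:
double_counted = capital_outlay_this_year + debt_service_this_year

print(f"Actual taxpayer cost this year: ${taxpayer_cost:.1f}M")
print(f"Double-counted total:           ${double_counted:.1f}M")
```

As the reviewer’s house analogy suggests, the annual cost to taxpayers is the loan payment, not the purchase price plus the payment.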

The ‘Plural of Anecdote is Not Data’ Award

Thomas B. Fordham Institute for Charter School Autonomy: A Half-Broken Promise (Thomas B. Fordham Institute and Public Impact, April 2010)
Reason Foundation for Fix the City Schools: Moving All Schools to Charter-Like Autonomy (March 2010)

Fittingly, we have a double winner in this plural category.

In Fix the City Schools, the Reason Foundation argues for “portfolio” school districts focused on closing low-performing schools and opening new ones under the management of autonomous corporations, and it sings hosannas to the improvements in student achievement in post-Katrina New Orleans. Even assuming real gains were made, however, myriad factors unrelated to the portfolio approach could explain some or all of them, including the massive exodus of low-income children from the city and a significant increase in resources. Miraculously, the findings from New Orleans, supplemented by examples from other cities, all evangelically testify to the healing powers of the portfolio school approach.

Not to be outdone, the Fordham Institute, in Charter School Autonomy, contends that charter schools have been deprived of the autonomy necessary for them to deliver on the innovative practices they promised. How could charters truly be innovative if they are still laboring under the yoke of bureaucratic control? It’s an interesting excuse for the not-particularly-impressive results of charter schools. But beyond anecdotes and rhetoric, nothing in the report addresses autonomy in relation to financial performance, resource allocation, academic results, or other key school characteristics and outcomes. The authors simply fail to address their own research question: whether and how authorizers’ constraints have adversely affected charter school autonomy and success.

Now, if we were being completely fair, this award would also be shared by the Blueprint research summaries published by the U.S. Department of Education. All six reviews noted the paucity of research evidence, and the department’s documents were replete with shadowed boxes providing war stories and anecdotes of “success.” In some cases, the boxed anecdotes were actually longer than the accompanying text. But in the “share the wealth” spirit of the administration, we decided that others should be allowed to bask in the glory of this Bunkum Award.

The ‘F Double Minus’ Award

Heartland Institute for 2010 State School Report Card (October 2010)
Education Trust for Stuck Schools: A Framework for Identifying Schools Where Students Need Change – Now (February 2010)

There is a crowded industry devoted to rating and ranking schools and states. The cool part about these ratings is that the imperious think tank raters get to assign a grade of F Double Minus to states and schools that have not adopted their group’s cherished reforms.

The competition is especially plentiful and fierce in this category, but Herbert Walberg and Mark Oestreich of the Heartland Institute vanquished all contenders with their 2010 State School Report Card. Our honorees ranked the states (and the District of Columbia) on student achievement, low education expenditures, and “adherence to learning standards,” plus an overall ranking based on an average of the first three. They created the indices, validated them by a rigorous examination of their preconceived proclivities, and then highlighted the top- and lowest-performing states on each index. Although the report explains how the indices were devised, it cites no research or rationale to support the methods beyond the data sources themselves and reports from “several think tanks.” As the report acknowledges, the researchers also don’t even try to control for state variations in demographics or other factors.

The report then assigns letter grades to each of the states (plus DC) using a forced distribution: 10 states each receive A’s, B’s, C’s, and D’s, and the remaining 11 must get F’s. The beauty of this normative system is its guarantee that it will always be raining – there will always be failing states, as the sketch below illustrates. Even setting aside problems with the criteria chosen, this seems a bit problematic. Anyway, after acknowledging that there is “very little empirical evidence” as to why some states do better than others, the authors stroll along to their conclusion that education quality is poor and that school choice is the obvious remedy. Assuming an acceptable method lies somewhere in this pile of subjective accretion, it is completely unconnected with these final recommendations.
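Here is a minimal Python sketch (with invented scores and placeholder state names) of why rank-based forced grading guarantees failures no matter how well everyone performs:

```python
# Forced-distribution grading: 10 A's, 10 B's, 10 C's, 10 D's, 11 F's,
# assigned purely by rank among the 50 states plus DC.
def forced_grades(scores):
    ranked = sorted(scores, key=scores.get, reverse=True)
    letters = ["A"] * 10 + ["B"] * 10 + ["C"] * 10 + ["D"] * 10 + ["F"] * 11
    return dict(zip(ranked, letters))

# Hypothetical: every "state" scores between 95 and 100 out of 100.
scores = {f"State{i:02d}": 95 + (i % 6) for i in range(51)}
grades = forced_grades(scores)

# Eleven states still "fail" - by construction, not by performance.
print(sum(1 for g in grades.values() if g == "F"))  # always prints 11
```

The grades carry no information about absolute quality; change every score and the same number of states fail.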

The runner-up in this category is a report from the Education Trust, called Stuck Schools, which suggests a framework for identifying chronically low-performing schools in need of turnaround. Among the four main problems identified by our reviewer was this: the norm-referenced methodology guarantees “failed” schools regardless of any school’s true performance or improvement. Sound familiar?

The ‘If I Say It Enough, Will It Still Be Untrue?’ Award

Heritage Foundation for Closing the Racial Achievement Gap (September 2010)

While the Heritage Foundation receives this award for publishing Closing the Racial Achievement Gap, by Matthew Ladner and Lindsey Burke, it should really share this award with the think tanks in Colorado, New Mexico, Arizona, Wisconsin, Oklahoma, Utah, and Indiana, as well as the Hoover Institution and the Pacific Research Institute, all of which also published essentially the same report. Perhaps even the derivative articles by Dr. Ladner at National Review Online and Foxnews.com deserve some recognition.

But Ladner’s fecundity isn’t really what sets this work apart. It’s his willingness to smash through walls of basic research standards in his dogged pursuit of his policy agenda. In this case, the agenda is a passel of Florida reforms: vouchers funded by tax credits, charter schools, online education, performance-based teacher pay, test-score grading of schools and districts, test-based grade retention, and alternative teacher certification. He likes them. A lot. And he contends that other states should adopt the same package because, in his vision, they have clearly caused an improvement in Florida schools.

The problem, alas, is with the “caused” part. Nothing in Dr. Ladner’s or the Heritage Foundation’s data or analyses comes even close to supporting a causal inference. Instead, the report offers uncontrolled descriptive data focused on NAEP fourth-grade reading scores. Those scores increased, so Ladner’s favored policies must be working. Our reviewer threw a few pails of cold water on that conclusion, noting first that other policies were also in effect during the period studied. Florida has an excellent, supportive reading program aimed at the early grades; its accountability system strongly targets resources at lower-performing schools; and it has one of the nation’s most aggressive class-size reduction reforms. Ladner steadfastly dismisses the possibility that these factors played a major role in any achievement improvements.

The reviewer also pointed out why Ladner focused on fourth-grade reading and why NAEP growth in other subjects and grades wasn’t nearly as impressive: the report fails to properly account for the state’s policy of retaining third-graders with low reading scores. Retention gives the bottom-scoring students another year to grow before they take the fourth-grade reading test – voilà. By analogy, consider growth in height instead of growth in test scores. If two states measured the average height of their fourth-graders, but one state (Florida) first identified the shortest 20% of third-graders and held them back to grow an additional year before measurement, that state’s “fourth-graders” would measure taller by construction.
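A minimal Python simulation of that height analogy (the growth rate and population parameters are invented purely for illustration):

```python
# Two "states" measure the average height of their fourth-graders.
# State B first holds back the shortest 20% of third-graders, so those
# children get an extra year of growth before they are measured.
import random

random.seed(0)
GROWTH_PER_YEAR = 6.0  # hypothetical cm of growth per year

third_graders = [random.gauss(130, 8) for _ in range(10_000)]

# State A: everyone is measured after one year of growth.
state_a = [h + GROWTH_PER_YEAR for h in third_graders]

# State B: the shortest 20% grow for two years before measurement.
cutoff = sorted(third_graders)[int(0.2 * len(third_graders))]
state_b = [h + GROWTH_PER_YEAR * (2 if h < cutoff else 1) for h in third_graders]

print(f"State A mean height: {sum(state_a) / len(state_a):.1f} cm")
print(f"State B mean height: {sum(state_b) / len(state_b):.1f} cm")  # higher
```

The two populations are identical; only the measurement timing differs, yet State B’s average comes out higher every time.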

But Dr. Ladner has found his bone, and he’s not letting go. When confronted with the reviewer’s critique, he responded, “The change averse may wish to quibble over the details, or agonize over just what reform did how much of what,” but the bottom line is that Florida’s fourth-grade scores increased dramatically. Indeed they did. And no research-bound egghead is going to mess up his good causal story.