William J. Mathis: (802) 383-0058, email@example.com
BOULDER, CO (February 23, 2017) – The 89th Academy Awards will be celebrated this weekend, which means it’s also time to announce the winner of the 2016 National Education Policy Center Bunkum Award. We invite you to enjoy our 11th annual tongue-in-cheek salute to the most egregiously shoddy think tank report reviewed in 2016.
This year’s Bunkum winner is the Center for American Progress (CAP), for its report, Lessons From State Performance on NAEP: Why Some High-Poverty Students Score Better Than Others.
The CAP report is based on a correlational study with the key finding that high standards increase learning for high-poverty students. The researchers compared changes in states’ test scores for low-income students to changes to those states’ standards-based policy measures as judged by the researchers. Their conclusions were that high standards lead to higher test scores and that states should adopt and implement the Common Core.
Alas, there was much less than met the eye.
In choosing the worst from among the many “worthy” contenders competing for the Bunkum Award, our judges applied evaluation criteria drawn from two published guidelines on how to understand research.
Here’s how the CAP report scored:
- Was the design appropriate?
No: The design was not sensitive enough to support the conclusions, so the authors padded it with “anecdotes” and “impressions.”
- Were the methods clearly explained?
No: The methods section is incomplete and opaque.
- Were the data sources appropriate?
No: The variables used were inadequate and were aggregated in unclear ways.
- Were the data gathered of sufficient quality and quantity?
No: The report uses just state-level NAEP scores and summary data.
- Were the statistical analyses appropriate?
No: A sample of just 50 cases (one per state) is too small for a multiple correlation analysis.
- Were the analyses properly executed?
Cannot be determined: The full results were not presented.
- Was the literature review thorough and unbiased?
No: The report largely neglected peer-reviewed research.
- Were the effect sizes strong enough to be meaningful?
No: Effect sizes were not presented, and the claims rest on the generally unacceptable 0.10 significance level.
- Were the recommendations supported by strong evidence?
No: Their conclusion is based on weak correlations.
The fundamental flaw in this report is simply that it uses inadequate data and analyses to make a broad policy recommendation in support of the Common Core State Standards. A reader may or may not agree with the authors’ conclusion that “states should continue their commitment to the Common Core’s full implementation and aligned assessments.” But that conclusion cannot and should not be based on the flimsy analyses and anecdotes presented in the report.
Watch the 2016 Bunkum Award video presentation, read the Bunkum-worthy report and the review, and learn about past Bunkum winners and the National Education Policy Center’s Think Twice Think Tank Review project: http://nepc.colorado.edu/think-tank/bunkum-awards/2016
About the Think Twice Think Tank Review Project:
Many organizations publish reports they call research – but are they? These reports often are published without having first been reviewed by independent experts – the “peer review” process commonly used for academic research.
Even worse, many think tank reports subordinate research to the goal of making arguments for policies that reflect the ideology of the sponsoring organization.
Yet, while they may provide little or no value as research, advocacy reports can be very effective for a different purpose: they can influence policy because they are often aggressively promoted to the media and policymakers.
To help the public determine which elements of think tank reports are based on sound social science, NEPC’s “Think Twice” Think Tank Review Project has, every year since 2006, asked independent experts to assess strengths and weaknesses of reports published by think tanks.
Experts have found few of the think tank reports to be sound and useful; most have little, if any, scientific merit. At the end of each year, NEPC editors sift through the reviewed reports to identify the worst offender. We then award the organization publishing that report NEPC’s Bunkum Award for shoddy research.