Policymakers Should Disregard New School-District Consolidation Report
Consolidation of Schools and Districts: What the Research Says and What it Means
URL for this press release: http://tinyurl.com/lkxxbem
BOULDER, CO (August 26, 2013) – The Center for American Progress (CAP) recently released a report intended “to spark a conversation” about closing and consolidating small school districts. The report, titled “Size Matters,” concludes: “Across the nation, we found that small, nonremote districts might represent as much as $1 billion in lost annual capacity” (p. 2). Unfortunately, two major problems fatally undermine the report’s usefulness: it makes limited and misleading use of existing research, and its own problematic analysis results in misguided recommendations.
The extensive research in this important area was summarized in a 2011 policy brief authored by Craig Howley, Jerry Johnson, and Jennifer Petrie and published by the National Education Policy Center. That 2011 brief concluded that, while the merits of consolidation should be examined on a case-by-case basis (a position also adopted by the CAP report), consolidations are “unlikely to be a reliable way to obtain substantive fiscal or educational improvement.”
Since the beginning of the Great Recession in 2008, many states have proposed eliminating school districts as a way of realizing savings. However, past waves of consolidation have already captured most potential efficiency and educational gains, so further systemic consolidation could prove counterproductive. For instance, decades of cost analyses demonstrate a U-shaped relationship between district size and costs, with inefficiencies in large districts as well as small ones. The CAP report, however, ignores the source of most size-related inefficiencies, which are found in very large school districts (mostly in metropolitan areas) rather than in very small ones. Although very small districts are numerous, they enroll far fewer students, so any savings from consolidating them would be quite limited. In very large districts, inefficiencies can pack a much stronger punch.
The report’s use of related research is haphazard at best. Of the 52 endnotes in the CAP report, only five can reasonably be considered citations of accepted scholarship. The report does cite the work of one important and well-regarded research team, but it misuses that work by ignoring substantial evidence that smaller schools and districts improve achievement prospects in impoverished places, rural and urban.
Sound use of research might also have saved the report from its frequent reliance on general, ungrounded appeals to the education of children. Data exist on the relationship of school and district consolidations to achievement, graduation rates, and children in poverty. The omission of solid research on these topics is a fundamental oversight.
Moreover, even if CAP’s hypothesized savings (“as much as $1 billion”) for small districts were realized, they would be less than one-fifth of one percent of education spending. This is not surprising. As the report notes, “. . . central administration is typically less than two percent of budgets” (p. 10) – there is only minimal potential for greater efficiency at the central office level.
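The “less than one-fifth of one percent” figure can be checked directly. Assuming total US public K-12 spending of roughly $600 billion around 2011 (this total is an assumption supplied here for illustration, not a figure from the report or the review):

```latex
\frac{\$1\ \text{billion}}{\$600\ \text{billion}} \approx 0.17\% \;<\; \tfrac{1}{5}\times 1\% = 0.2\%
```

Even under CAP’s most generous estimate, then, the hypothesized savings remain a rounding error in national education spending.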
How did the report conclude that nationwide consolidation of small districts would likely yield “savings” of $1 billion? As explained below, the study’s methodological flaws mean that the projection is likely an overestimate, perhaps a large one.
One key problem is that the report calculates savings for all states based on estimates of what it would cost for a district of 1,000 students to meet standards. That is, the study’s national projections are extrapolated from calculations based not on actual spending, but instead on estimates of costs that represent a level of spending thought necessary for students to meet standards. The estimates of such costs came from educational adequacy studies conducted in just four states: Connecticut, Kansas, Montana, and Pennsylvania. Use of these estimates to judge efficiency in all 50 states is unsound.
First, these are estimates of hypothetical, idealized expenditures rather than real expenditures. The ‘reality’ being studied, in other words, does not actually exist. In particular, cost estimates produced by adequacy studies average 25% to 30% higher than actual spending, reflecting the nation’s pervasive under-investment in education.
Second, the US does not have a single national system but more than 50 state and territorial systems, and the norms of funding and expenditure vary widely among them. Constructing a national standard from small professional-judgment panels in four states cannot yield a sensible national estimate. The report’s projections of cost savings are at best conjectural.
The report is also hampered by vague claims such as “researchers have known that size matters” and “schools must reform school management systems.” Similarly, statements that one size does not fit all, that dollars must be well spent, and that certain support systems can be regionalized are laudable (and often already reflect practice) and would probably be broadly supported. Yet the lack of specificity reduces these conclusions to aphorisms. The report also fails to recognize or account for the extensive regionalization of fiscal and pedagogical support systems that already exists.
In the end, the new report adds nothing to our knowledge and provides little guidance to policymakers. Indeed, it could mislead them into investing in proposals that have trifling economic or educational benefits.
Find the CAP report, “Size Matters: A Look at School-District Consolidation” at:
Find the NEPC review of consolidation research at: