School Finance 101: School Funding Myths & Misdirects

There are a handful of commonly cited bodies of evidence, and deceitful smokescreens, intended to undermine the importance of equitable and adequate financing for schools. Here’s my abbreviated rebuttal sheet for what I call the School Money Myths & Misdirects:

First, many, including Eric Hanushek, assert that school spending has climbed for decades while test scores have remained “virtually flat.”[1] Others have countered that test scores have not, in fact, remained flat, especially when accounting for changes in the student population.[2] Still others have pointed out the fallacious logic of this argument, noting that “between 1960 and 2000 the rate of cigarette smoking for females decreased by more than 30 percent while the rate of deaths by lung cancer increased by more than 50 percent over the same time period,” seemingly implying, if one applies the same flawed reasoning, that smoking cessation increases lung cancer.[3] I rebut the overall trends for student outcomes and spending in a recent blog post.

Second, many point to the supposedly high spending of the United States and its relatively low scores on international assessments as evidence that spending in the U.S. in particular is unrelated to school quality. Like the long-term trend argument, this one mischaracterizes U.S. students’ performance.[4] It also relies on very poor, not cross-nationally comparable, school spending figures, while failing to consider a host of intervening factors.[5] The most thorough rebuttal of this claim can be found in a recent report I wrote with Mark Weber.

Third, in 1986, Eric Hanushek produced the first in a series of “vote count” meta-analyses, wherein he tallied the cases in which research studies found positive, negative, or non-significant correlations between school resource measures and student outcomes. Finding mixed results, Hanushek concluded, “There appears to be no strong or systematic relationship between school expenditures and student performance” (p. 1162).[6] This claim became a mantra for those denying the connection between spending and school quality. Soon thereafter, other researchers applied quality standards to filter the existing studies, finding that the preponderance of higher-quality studies did in fact find positive correlations.[7] But those studies pale in both methodological rigor and relevance next to more recent longitudinal studies, which consistently find positive effects of school finance reforms on student outcomes.[8] A thorough review of this literature is available in my Shanker Institute report, Does Money Matter in Education?
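Why do critics call vote counting “insensitive”? A small, hypothetical sketch makes the point: the study effects and standard errors below are invented for illustration only, but they show how a set of individually imprecise studies can each fail to reach significance (so a vote count reads “no systematic relationship”) even when pooling the same evidence, as modern meta-analysis does, yields a clearly positive effect.

```python
# Hypothetical illustration of why "vote counting" is insensitive:
# many small, imprecise studies can each be individually non-significant
# even when the pooled evidence points clearly in one direction.
# All numbers below are invented for illustration only.

import math

# (effect estimate, standard error) for ten hypothetical studies
studies = [(0.10, 0.08), (0.12, 0.09), (0.08, 0.07), (0.11, 0.10),
           (0.09, 0.08), (0.13, 0.09), (0.07, 0.06), (0.10, 0.09),
           (0.12, 0.10), (0.09, 0.07)]

def vote_count(studies, z_crit=1.96):
    """Tally studies by sign and significance, as in a vote count."""
    tally = {"sig_positive": 0, "sig_negative": 0, "non_significant": 0}
    for est, se in studies:
        z = est / se
        if z > z_crit:
            tally["sig_positive"] += 1
        elif z < -z_crit:
            tally["sig_negative"] += 1
        else:
            tally["non_significant"] += 1
    return tally

def pooled_estimate(studies):
    """Fixed-effect inverse-variance pooling of the same studies."""
    weights = [1 / se ** 2 for _, se in studies]
    est = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return est, est / se  # pooled effect and its z-statistic

print(vote_count(studies))       # every study individually "non-significant"
print(pooled_estimate(studies))  # yet the pooled effect is clearly positive
```

In this invented example, all ten studies land in the “non-significant” bin, so a vote count concludes nothing is there, while the pooled estimate (about 0.10, z near 3.8) is decisively positive. This is the sense in which Greenwald and colleagues call the procedure “a rather insensitive procedure for summarizing results.”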

Fourth, Hanushek and others continue to rely on anecdotal claims that massive spending increases in Kansas City, Missouri, and in the state of New Jersey failed to produce any substantive improvement in student outcomes.[9] The Kansas City claims most often mischaracterize the amount, duration, and context (a desegregation order) of the funding.[10] The New Jersey claims are most conveniently rebutted by Hanushek himself. While Hanushek, in the context of several recent school funding legal challenges, has asserted that “Compared to the rest of the nation, performance in New Jersey has not increased across most grades and racial groups,”[11] his own more recent work has found: “The other seven states that rank among the top-10 improvers, all of which outpaced the United States as a whole, are Massachusetts, Louisiana, South Carolina, New Jersey, Kentucky, Arkansas, and Virginia.”[12]

Fifth and finally, two arguments that frequently resurface are that:

  a) how money is spent matters more than how much; and
  b) student backgrounds matter much more than schools and money.

While the assertion that “how money is spent is important” is certainly valid, one cannot reasonably leap from there to the claim that how money is spent is necessarily more important than how much money is available. Yes, how money is spent matters, but if you don’t have it, you can’t spend it. Further, those who have more of it have more latitude in determining how to use it.

The second assertion misses the point entirely. It is true that student background is more strongly associated with student outcomes than are school resource measures. But that finding can either be used as a misdirect, to convince the public that there’s no sense trying to leverage resources to mitigate these disparities, or it can be viewed as a challenge to be overcome, in part, through well-crafted state school finance policy and resource allocation. In fact, it is precisely because student backgrounds matter so much in determining outcomes that we must figure out how best to leverage resources to offset disadvantages created by disparities in backgrounds. And because disparities in student backgrounds are so substantial, the costs of offsetting them can be substantial as well.[13]


[1] Hanushek, E. (2015). Money Matters After All? Education Next.

[2] Rothstein, R. (2011). Fact-Challenged Policy. Policy Memorandum# 182. Economic Policy Institute.

[3] Jackson, K., Johnson, R. C., & Persico, C. (2015). Money Does Matter After All. Education Next.

[4] Carnoy, M., & Rothstein, R. (2013). What do international tests really show about US student performance. Economic Policy Institute, 28.

[5] Baker, B. D., & Weber, M. (2016). Deconstructing the Myth of American Public Schooling Inefficiency.

[6] E. A. Hanushek, “Economics of Schooling: Production and Efficiency in Public Schools,” Journal of Economic Literature 24, no. 3 (1986): 1141-1177. A few years later, Hanushek paraphrased this conclusion in another widely cited article as “Variations in school expenditures are not systematically related to variations in student performance.” E. A. Hanushek, “The Impact of Differential Expenditures on School Performance,” Educational Researcher 18, no. 4 (1989): 45-62. Hanushek describes the collection of studies relating spending and outcomes as follows: “The studies are almost evenly divided between studies of individual student performance and aggregate performance in schools or districts. Ninety-six of the 147 studies measure output by score on some standardized test. Approximately 40 percent are based upon variations in performance within single districts while the remainder look across districts. Three-fifths look at secondary performance (grades 7-12) with the rest concentrating on elementary student performance” (fn #25).

[7] Greenwald and colleagues explain: “Studies in the universe Hanushek (1989) constructed were assessed for quality. Of the 38 studies, 9 were discarded due to weaknesses identified in the decision rules for inclusion described below. While the remaining 29 studies were retained, many equations and coefficients failed to satisfy the decision rules we employed. Thus, while more than three quarters of the studies were retained, the number of coefficients from Hanushek’s universe was reduced by two thirds” (p. 363). Greenwald and colleagues further explain that: “Hanushek’s synthesis method, vote counting, consists of categorizing, by significance and direction, the relationships between school resource inputs and student outcomes (including but not limited to achievement). Unfortunately, vote-counting is known to be a rather insensitive procedure for summarizing results. It is now rarely used in areas of empirical research where sophisticated synthesis of research is expected” (p. 362).
Hanushek (1997) provides his rebuttal to some of these arguments, and Hanushek returns to his “uncertainty” position: “The close to 400 studies of student achievement demonstrate that there is not a strong or consistent relationship between student performance and school resources, at least after variations in family inputs are taken into account” (p. 141). E. A. Hanushek, “Assessing the Effects of School Resources on Student Performance: An Update,” Educational Evaluation and Policy Analysis 19, no. 2 (1997): 141-164. See also E. A. Hanushek, “Money Might Matter Somewhere: A Response to Hedges, Laine and Greenwald,” Educational Researcher 23 (May 1994): 5-8.

[8] Jackson, C. K., Johnson, R. C., & Persico, C. (2015a). The effects of school spending on educational and economic outcomes: Evidence from school finance reforms (No. w20847). National Bureau of Economic Research.

Jackson, C. K., Johnson, R. C., & Persico, C. (2015b). Boosting Educational Attainment and Adult Earnings. Education Next.

Lafortune, J., Rothstein, J., & Schanzenbach, D. W. (2015). School Finance Reform and the Distribution of Student Achievement. Working Paper. University of California at Berkeley.

Candelaria, C., & Shores, K. (2017). Court Ordered Finance Reforms in the Adequacy Era: Heterogeneous Causal Effects and Sensitivity. Working Paper.

[9] Baker, B., & Welner, K. (2011). School finance and courts: Does reform matter, and how can we tell. Teachers College Record, 113(11), 2374-2414.

[10] Baker, B., & Welner, K. (2011). School finance and courts: Does reform matter, and how can we tell. Teachers College Record, 113(11), 2374-2414.

[11] (Gannon v. Kansas)

[12] E. A. Hanushek, P. E. Peterson, and L. Woessmann, “Is the US Catching Up: International and State Trends in Student Achievement,” Education Next 12, no. 4 (2012): 24.

[13] Duncombe, William, and John Yinger. “How much more does a disadvantaged student cost?.” Economics of Education Review 24, no. 5 (2005): 513-532.

This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.
The views expressed by the blogger are not necessarily those of NEPC.

Bruce D. Baker

Bruce D. Baker is a Professor in the Graduate School of Education at Rutgers, The State University of New Jersey, where he teaches courses in school finance polic...