
Shanker Blog: The Persistent Misidentification of “Low Performing Schools”

In education, we hear the terms “failing school” and “low-performing school” quite frequently. Usually, they are used in sound-bite-style catchphrases such as, “We can’t keep students trapped in ‘failing schools.’” Sometimes, however, they are used to refer to a specific group of schools in a given state or district that are identified as “failing” or “low-performing” as part of a state or federal law or program (e.g., waivers, SIG). There is, of course, interstate variation in these policies, but one common definition is that schools are “failing/low-performing” if their proficiency rates are in the bottom five percent statewide.

Putting aside the (important) issues with judging schools based solely on standardized testing results, low proficiency rates (or low average scores) tell you virtually nothing about whether or not a school is “failing.” As we’ve discussed here many times, students enter their schools performing at different levels, and schools cannot control the students they serve, only how much progress those students make while they’re in attendance (see here for more).

From this perspective, then, there may be many schools that are labeled “failing” or “low performing” but are actually of above-average effectiveness in raising test scores. And, making things worse, virtually all of these will be schools that serve the most disadvantaged students. If that’s true, it’s difficult to think of anything more ill-advised than closing these schools, or even labeling them as “low performing.” Let’s take a quick, illustrative look at this possibility using the “bottom five percent” criterion, and data from Colorado in 2013-14 (note that this simple analysis is similar to what I did in this post, but this one is a little more specific; also see Glazerman and Potamites 2011; Ladd and Lauen 2010; and especially Chingos and West 2015).

First, one quick clarification: This analysis is not about Colorado per se. It is only using Colorado data to illustrate a larger point. For the purposes of formal accountability policy (e.g., SIG, ESEA waivers, etc.), different states identify “failing schools” or similar categories in different ways (e.g., for a review of identification in ESEA waivers, see Polikoff et al. 2014). In all states, absolute performance (proficiency rates for elementary/middle schools, often mixed with graduation rates for high schools) is the primary driver of which schools get identified. But a number of states and programs do make an effort to include growth measures in their identification schemes (or, sometimes, pseudo-growth, such as proficiency rate changes). So, I’m using Colorado data, but the conclusions are about one common way of identifying low-performing schools, not about any particular state.

That said, let’s begin with the distribution of median growth percentiles (MGPs) for all Colorado schools, regardless of their proficiency rates, starting with reading. This will give us a frame of reference for looking at the results for the schools in the bottom five percent. In the graph below, each bar represents the proportion of all schools (about 2,000 of them in total) receiving reading growth scores within each MGP bin. For example, looking at the highest bar, about 21-22 percent of schools receive a growth score between 46 and 50.
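(For readers who want to replicate this kind of tabulation with their own state’s data, here is a minimal sketch in Python/pandas. The file name and column names, school_mgp.csv and read_mgp, are hypothetical placeholders, not the actual layout of the Colorado data files.)

```python
import pandas as pd

# Hypothetical file and column names; the real Colorado data layout differs.
df = pd.read_csv("school_mgp.csv")  # one row per school

# Group reading MGPs (which run from 1 to 99) into 5-point bins and
# tabulate the share of schools falling in each bin, mirroring the bars
# in the graph described above.
bins = list(range(0, 105, 5))  # right-inclusive edges: (0, 5], (5, 10], ...
shares = pd.cut(df["read_mgp"], bins=bins).value_counts(normalize=True).sort_index()
print(shares)  # the (45, 50] bin should hold the largest share
```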

We have a nice, normal-looking distribution here, which is to be expected, since the growth models are designed that way. Now that we have our comparison graph, let’s see the same graph, but limit the results to schools with 2013-14 reading proficiency rates in the bottom five percent statewide (about 75 schools).*

The distribution is no longer bell-shaped (which is to be expected, since we’ve isolated a subset of schools), and you can see that it is further over to the left compared with the graph of all schools. In other words, the average MGP of the “failing schools” (those with proficiency rates in the bottom five percent statewide) is a bit lower than it is for all schools combined (to give you an idea, the specific figures are 42 versus 50).

You’ll also notice, however, that many of these “failing schools” do not receive low MGPs in 2013-14, and a bunch actually score pretty well. For example, one in five receives an MGP above the statewide median.**
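(Continuing the sketch from above, with the same hypothetical file and columns plus a hypothetical read_prof_rate proficiency-rate column, the “one in five” figure is simply the share of bottom-five-percent schools whose MGP exceeds the statewide median:)

```python
import pandas as pd

df = pd.read_csv("school_mgp.csv")  # same hypothetical layout as above

# Flag the "bottom five percent" of schools by reading proficiency rate.
cutoff = df["read_prof_rate"].quantile(0.05)
bottom5 = df[df["read_prof_rate"] <= cutoff]

# Share of these "failing" schools with growth above the statewide median.
median_mgp = df["read_mgp"].median()
share_above = (bottom5["read_mgp"] > median_mgp).mean()
print(f"{share_above:.0%} of bottom-5% schools show above-median growth")
```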

Now, let’s quickly perform the same exercise for math scores, beginning with the distribution for all schools.

The distribution is a little shorter and fatter than for reading, but, as expected, nice and bell-shaped. Now, here it is for the schools with the lowest proficiency rates in math.

Again, we see that the distribution for so-called “low performing schools,” compared with that of all schools, is further over to the left – i.e., MGPs are lower among these schools compared to all schools. Nevertheless, 12 percent of these low-proficiency schools have MGPs above the statewide median. Once again, the MGPs suggest that schools in the bottom five percent in terms of proficiency vary in their estimated effectiveness in boosting test scores. A disproportionate number, vis-à-vis the entire state, receive low MGPs, but many also score fairly well.

Overall, then, by this entirely test-based standard sometimes applied in the identification of “failing schools” or “low-performing schools,” as many as one in five schools identified is actually estimated to produce growth at levels above the median, and a fairly large chunk of the remaining schools have growth estimates that I would characterize as respectable. This is not to say that these schools cannot or should not improve their test-based effectiveness. But they should not be labeled “failing” or “low performing.” They are not.

And these basic results are not at all surprising. You will find the same situation when comparing growth models and status measures in any state, regardless of the criterion for identification. Yet the “failing or low performing schools” label, as defined above, permeates school accountability policy, and the debate surrounding that policy, in the U.S.

Now, to reiterate, many states and programs factor in other measures, but more than a few do not. And even those that do are still driven largely by status indicators such as proficiency rates. Proficiency rates or average scores are, for example, useful when you want to direct resources at schools serving struggling students. States and districts might even factor in these status measures when identifying low-growth schools for intervention (see this proposal).

But, even in the narrow context of testing outcomes, there is a big difference between a low performing school and a school that serves relatively low performing students.

So, whenever you hear the endless (though often well-intentioned) rhetoric about the need for policies to address “failing schools,” no matter what you think should be done about them (and there is plenty of disagreement on that), bear in mind that the first essential step is to identify these schools properly (hopefully using multiple types of outcomes). And, more often than not, we as a nation have not done a good job of it.

- Matt Di Carlo

*****

* Many of these schools are high schools. But the basic conclusions of this analysis do not change if it is limited to elementary and middle schools.

** Colorado’s growth model, which is aptly named “The Colorado Growth Model,” is among those that have been shown to be moderately associated with absolute performance (e.g., proficiency rates). Therefore, the simple analysis here may understate the proportion of “failing schools” that have strong growth scores, vis-à-vis how things would look under a different model.
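(As a rough check of this property in any dataset, one could correlate the status and growth measures directly, again using the hypothetical columns from the sketches above:)

```python
import pandas as pd

df = pd.read_csv("school_mgp.csv")  # hypothetical layout from earlier sketches

# A clearly positive correlation indicates that the growth measure is
# itself associated with absolute performance (proficiency rates).
print(df["read_prof_rate"].corr(df["read_mgp"]))
```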

This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.

The views expressed by the blogger are not necessarily those of NEPC.

Matthew Di Carlo

Matthew Di Carlo is a senior research fellow at the non-profit Albert Shanker Institute in Washington, D.C. His current research focuses mostly on education policy...