
Charter and Regular Public School Performance in "Ohio 8" Districts, 2010-11

Every year, the state of Ohio releases an enormous amount of district- and school-level performance data. Since Ohio has among the largest charter school populations in the nation, the data provide an opportunity to examine performance differences between charters and regular public schools in the state.

Ohio’s charters are concentrated largely in the urban “Ohio 8” districts (sometimes called the “Big 8”): Akron, Canton, Cincinnati, Cleveland, Columbus, Dayton, Toledo, and Youngstown. Charter coverage varies considerably among the “Ohio 8” districts, but it averages about 20 percent, compared with roughly five percent across the whole state. I will therefore limit my quick analysis to these districts.

Let’s start with the measure that gets the most attention in the state: Overall “report card grades.” Schools (and districts) can receive one of six possible ratings: Academic emergency, academic watch, continuous improvement, effective, excellent, and excellent with distinction.

These ratings represent a weighted combination of four measures. Two of them measure performance “growth,” while the other two measure “absolute” performance levels. The growth measures are AYP (yes or no) and value-added (whether schools meet, exceed, or come in below the growth expectations set by the state’s value-added model). The first “absolute” performance measure is the state’s “performance index,” which is calculated based on the percentage of a school’s students who fall into the four NCLB categories of advanced, proficient, basic, and below basic. The second is the number of “state standards” that schools meet as a percentage of the number of standards for which they are “eligible.” For example, the state requires 75 percent proficiency in all the grade/subject tests that a given school administers, and schools are “awarded” a “standard met” for each grade/subject in which three-quarters of their students score above the proficiency cutoff (state standards also include targets for attendance and a couple of other non-test outcomes).
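To make the “percent of standards met” arithmetic concrete, here is a minimal sketch in Python. The grade/subject proficiency rates are hypothetical, and the real measure also counts the non-test standards (attendance and so on) mentioned above, which are omitted here.

```python
# Minimal sketch of the "percent of standards met" logic described above.
# The proficiency rates below are hypothetical, and the real calculation also
# includes non-test standards (e.g., attendance), which are omitted here.

PROFICIENCY_TARGET = 0.75  # the state requires 75 percent proficiency per grade/subject

# Hypothetical school: share of students scoring proficient or above, by grade/subject.
proficiency_rates = {
    ("grade 4", "reading"): 0.81,
    ("grade 4", "math"): 0.72,
    ("grade 5", "reading"): 0.77,
    ("grade 5", "math"): 0.69,
}

standards_met = sum(rate >= PROFICIENCY_TARGET for rate in proficiency_rates.values())
standards_eligible = len(proficiency_rates)
percent_standards_met = 100 * standards_met / standards_eligible

print(f"{standards_met} of {standards_eligible} standards met "
      f"({percent_standards_met:.0f} percent)")
```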

The graph below presents the raw breakdown in report card ratings for charter and regular public schools.

The distributions are quite similar, though there is a higher concentration of charter schools in the “academic emergency” and “academic watch” categories, while regular public schools “make up” most of the difference in the “effective” and “continuous improvement” ratings.

As always, however, schools can differ in the characteristics of their students (and failure to account for these differences can be misleading). We might attempt to partially address these differences by fitting models that control for the following variables: The percent of students who are economically disadvantaged, percent special education, percent LEP, percent minority, and district fixed effects. Since so few schools receive the “excellent with distinction” designation, I collapse that into “excellent.” There are more details about the models in the tables’ stubs.
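The post does not name the exact estimator, but an ordered logit is one natural way to fit “models designed for categorical outcomes” like these. The sketch below, in Python with statsmodels, assumes a school-level file and column names (charter, pct_econ_disadv, and so on) that are hypothetical.

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical file and column names; the real data come from Ohio's
# school-level report card files.
df = pd.read_csv("ohio8_schools_2010_11.csv")

# Collapse "excellent with distinction" into "excellent," as described above,
# and treat the ratings as an ordered categorical outcome.
rating_order = ["academic emergency", "academic watch", "continuous improvement",
                "effective", "excellent"]
df["rating"] = (df["rating"]
                .replace("excellent with distinction", "excellent")
                .astype(pd.CategoricalDtype(categories=rating_order, ordered=True)))

# Student-characteristic controls plus district fixed effects (dummy variables);
# charter is assumed to be a 0/1 indicator.
X = pd.get_dummies(
    df[["charter", "pct_econ_disadv", "pct_sped", "pct_lep", "pct_minority", "district"]],
    columns=["district"], drop_first=True).astype(float)

result = OrderedModel(df["rating"], X, distr="logit").fit(method="bfgs", disp=False)
print(result.summary())
```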

In short, there are significant differences between charters and regular public schools in the likelihood of receiving the various ratings, even controlling for the student characteristics mentioned above. To make things simpler, let’s take a look at how “being a charter school” affects the predicted probability of receiving ratings using three different “cutoff” points: The probability of schools receiving a rating of “continuous improvement” or better, “effective” or better, and “excellent” or better. The graph below represents the change in probability for charter schools.
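One way to produce these cutoff comparisons is to fit a separate binary logit at each threshold and look at the average marginal effect of charter status on the predicted probability. A hedged sketch, again with hypothetical file and column names:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and column names, as in the sketch above.
df = pd.read_csv("ohio8_schools_2010_11.csv")

rating_order = ["academic emergency", "academic watch", "continuous improvement",
                "effective", "excellent"]
df["rating_num"] = (df["rating"]
                    .replace("excellent with distinction", "excellent")
                    .map({r: i for i, r in enumerate(rating_order)}))

controls = "pct_econ_disadv + pct_sped + pct_lep + pct_minority + C(district)"

# One binary logit per cutoff: the outcome is 1 if the school's rating is at or
# above the cutoff, 0 otherwise.
for cutoff, label in [(2, "continuous improvement or better"),
                      (3, "effective or better"),
                      (4, "excellent or better")]:
    df["at_or_above"] = (df["rating_num"] >= cutoff).astype(int)
    fit = smf.logit(f"at_or_above ~ charter + {controls}", data=df).fit(disp=False)
    print(label)
    # Average marginal effects on the predicted probability; the row for
    # charter corresponds to the kind of difference plotted in the graph.
    print(fit.get_margeff(at="overall").summary())
```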

The difference between the two types of schools in the probability of receiving “excellent” or better (-0.02, or two percentage points) is small and not statistically significant. The other two differences, on the other hand, are both large and significant. Charter schools are 13 percentage points less likely to receive a rating of “effective” or better, and they are 22 percentage points less likely to receive “continuous improvement” or better.

These results remain virtually unchanged if I exclude “digital” (online) charter schools from the analysis.

We might decompose these differences by taking a look at charter/regular public school differences on the four separate measures that make up the composite ratings: value-added, AYP, the state performance index, and the percent of standards met. Using the same models as for the report card grades (models designed for categorical outcomes), I find no difference between charter schools and regular public schools in the likelihood of receiving any of the three value-added ratings, which are “met expectations,” “above expectations,” and “below expectations.” Any differences between charter and regular public schools in the raw tabulation of value-added outcomes are probably just statistical noise.
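For the three-category value-added rating, one option consistent with “models designed for categorical outcomes” is a multinomial (or ordered) logit. A minimal multinomial sketch, again with hypothetical file and column names:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and column names, as in the sketches above.
df = pd.read_csv("ohio8_schools_2010_11.csv")

# Code the three value-added ratings as integers for the multinomial logit.
va_codes = {"below expectations": 0, "met expectations": 1, "above expectations": 2}
df["va_code"] = df["value_added_rating"].map(va_codes)

controls = "pct_econ_disadv + pct_sped + pct_lep + pct_minority + C(district)"
fit = smf.mnlogit(f"va_code ~ charter + {controls}", data=df).fit(disp=False)

# The charter coefficients (one per non-reference outcome) capture the
# charter/regular public school difference in the likelihood of each rating.
print(fit.summary())
```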

(Side note: The student characteristics variables aren’t very useful in explaining the differences between schools in their value-added ratings, which is to be expected, since the value-added models are designed to control for these differences.)

Similarly, again controlling for student characteristics, there are no charter/regular public school differences in terms of the likelihood of making AYP. It seems that the differences between the two types of schools in the report card grades are not driven by discrepancies in the two “growth-oriented” measures that feed into those ratings (although the fact that these two measures are categorical ratings, rather than numerical indexes, limits the degree to which schools can be differentiated from one another).

There are, on the other hand, substantial differences between charters and regular public schools in terms of the two absolute performance measures: The state performance index and the percent of standards met.

These results are presented in the graph below.

Controlling for the other variables in the model, the percent of state standards that charter schools meet is just under 14 percentage points lower than that of regular public schools, while charters’ performance index scores are about 6.5 points lower. Both of these differences are educationally meaningful and statistically significant at any conventional level.
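Since these two outcomes are continuous, a simple linear regression with the same controls is one way to estimate such adjusted gaps; the coefficient on the charter indicator is the quantity of interest. A sketch, again with hypothetical file and column names:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and column names, as in the sketches above.
df = pd.read_csv("ohio8_schools_2010_11.csv")

controls = "pct_econ_disadv + pct_sped + pct_lep + pct_minority + C(district)"

# Linear models for the two "absolute" performance measures; charter is
# assumed to be a 0/1 indicator, so its coefficient is the adjusted gap.
for outcome in ["pct_standards_met", "performance_index"]:
    fit = smf.ols(f"{outcome} ~ charter + {controls}", data=df).fit()
    print(f"{outcome}: charter coefficient = {fit.params['charter']:.2f}, "
          f"p-value = {fit.pvalues['charter']:.3f}")
```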

Needless to say, these are simple school-level models. They mask a great deal of underlying variation, both within and between schools (it’s also worth noting that the two “absolute” performance measures are proficiency-based, and so they transmit limited information about the distribution of actual scores). These results do not prove a causal effect of charter schooling. They are only associations, and have limited policy relevance, since they don’t tell us why particular charter schools perform better or worse than comparable regular public schools.

That said, it seems that there were significant differences between the two types of schools in the “Ohio 8” districts in 2010-11. Specifically, charter schools were less likely than regular public schools to receive the higher report card ratings, and this appears to have been a result of charters’ lower performance on the two measures that gauge absolute performance levels, whereas there were no discernible differences in terms of the two growth-oriented measures. In other words, within “Ohio 8” districts, charter and regular public schools are equally likely, all else being equal, to be making progress, but regular public school students perform at a higher level than their charter school counterparts.

These results square with previous (more sophisticated) analyses of charter schools in Ohio (see here and here), which generally find that they either underperform or perform roughly on par with comparable regular public schools.


Matthew Di Carlo

Matthew Di Carlo is a senior research fellow at the non-profit Albert Shanker Institute in Washington, D.C. His current research focuses mostly on education polic...