The Charter School Authorization Theory
Anyone who wants to start a charter school must of course receive permission, and there are laws and policies governing how such permission is granted. In some states, multiple entities (mostly districts) serve as charter authorizers, whereas in others, there is only one or very few. For example, in California there are almost 300 entities that can authorize schools, almost all of them school districts. In contrast, in Arizona, a state board makes all the decisions.
The conventional wisdom among many charter advocates is that the performance of charter schools depends a great deal on the “quality” of authorization policies – how those who grant (or don’t renew) charters make their decisions. This is often the response when supporters are confronted with the fact that charter results are varied but tend to be, on average, no better or worse than those of regular public schools. They argue that some authorization policies are better than others – i.e., bad processes allow some poorly designed schools to start, while failing to close others.
This argument makes sense on the surface, but there seems to be scant evidence on whether and how authorization policies influence charter performance. From that perspective, the authorizer argument might seem a bit like a tautology – i.e., there are bad schools because authorizers allow bad schools to open, and fail to close them. As I am not particularly well-versed in this area, I thought I would look into this a little bit.
Answering the question of how authorization policies influence results is a difficult task to say the least. Not only do you have to assess school performance (say, test-based value-added), which is a haul by itself, but then you have to see whether the estimated effects are associated with different types of authorization policies, preferably across a bunch of different locations over a fairly long period of time. And, in such an analysis, the findings would have to be interpreted cautiously, since it would be virtually impossible to separate the effect of authorization policies from all the other factors at work. There are a lot of unobservable factors here, including, most basically, the judgment of the people making the decisions.
Perhaps as a result, this type of work seems to be quite rare (if I’m missing some, which is quite possible, please leave a comment).*
One very limited exception is the well-known CREDO study, which finds that charter performance is lower by a fairly sizeable margin in states that allow multiple entities to authorize charter schools. The interpretation here is that applicants can fish for the “weakest link,” and find the entity willing to approve their plan. There is also a significant, but much smaller, association between higher performance and states’ allowing operators to appeal decisions about whether their schools may open or be renewed.
Since, however, arguments about the importance of authorization are so prevalent, I wanted to take a stab at some additional examination (though it’s almost a blind stab, as it’s about as rough and simplistic as it can possibly be).
For a performance measure, I’m also using CREDO’s results, as they are at least somewhat comparable between states – i.e., they use the same methods for the analysis in each state (though keep in mind that the CREDO estimates do not reflect absolute charter [test-based] performance, but rather that relative to comparable regular public schools).
The next step was to choose a bunch of different authorizer policies that are considered important. For this, I relied on a report from the National Association of Charter School Authorizers (NACSA). Their results, which they collected via surveys, detail whether or not states have policies that embody “essential practices” that NACSA considers important for ensuring quality, including those related to the application process (e.g., timelines, interviews), how decisions are made (e.g., renewal/revocation criteria) and how schools are managed while open (e.g., audits, renewal terms).**
Because I only have CREDO results across whole states, I had to exclude from my “analysis” those that allow multiple authorizers, leaving me with eight states.***
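(For anyone inclined to tinker along, here is roughly what this setup looks like as a tiny pandas frame. To be clear, every label and number below is an invented placeholder – not an actual CREDO estimate or NACSA coding – and the three policy columns are hypothetical stand-ins for the practices discussed later.)

```python
import pandas as pd

# Hypothetical eight-state frame: the "effect" columns mimic CREDO-style
# estimates of charter performance relative to comparable regular public
# schools; the policy columns mimic NACSA-style yes/no codings.
# Every value here is invented for illustration.
data = pd.DataFrame({
    "state":               ["S1", "S2", "S3", "S4", "S5", "S6", "S7", "S8"],
    "math_effect":         [0.06, 0.02, -0.01, 0.04, -0.03, 0.01, -0.02, 0.03],
    "read_effect":         [0.04, 0.03, 0.00, 0.02, -0.02, 0.01, -0.01, 0.02],
    "nacsa_index":         [10, 10, 9, 11, 12, 10, 11, 12],  # count of "essential practices"
    "five_year_term":      ["no", "yes", "yes", "no", "yes", "yes", "yes", "yes"],
    "review_panel":        ["yes", "no", "yes", "no", "yes", "yes", "yes", "yes"],
    "revocation_criteria": ["yes", "yes", "no", "no", "yes", "yes", "yes", "yes"],
})
print(data)
```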
Please note: I cannot possibly overstate the degree to which this “analysis” (and I use that term loosely) is purely illustrative, and can’t even be used to draw tentative conclusions about the impact of charter authorizing policies. There are literally dozens of reasons why this is the case, too many to carry out even a broad strokes discussion. The idea is just to see if there are any associations worth looking into further.
The first way to look at these data is to examine the relationship between states’ charter performance and their overall score on NACSA’s index of “essential practices.” The index simply counts how many of the “essential practices” each state uses. In theory, it can range from 0 to 12, but most states (including all of those in my data) received scores between 9 and 12.
Since there are so few states, I’m not comfortable doing anything more complicated than looking at them individually (keep in mind that I’m ignoring statistical significance in viewing these estimates – some of the smaller ones are not discernibly different from zero). So, in the graph below, each dot is a state, and there are separate sub-graphs for math and reading.
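(A plot along these lines takes only a few lines of matplotlib; the sketch below assumes the hypothetical `data` frame from the earlier snippet.)

```python
import matplotlib.pyplot as plt

# Two panels (math, reading); each dot is a state.
fig, axes = plt.subplots(1, 2, figsize=(9, 4), sharey=True)
for ax, col, title in zip(axes, ["math_effect", "read_effect"], ["Math", "Reading"]):
    ax.scatter(data["nacsa_index"], data[col])
    ax.axhline(0, linewidth=0.8)  # zero = no difference from regular public schools
    ax.set_xlabel('NACSA "essential practices" index')
    ax.set_title(title)
axes[0].set_ylabel("Estimated charter effect (CREDO)")
plt.tight_layout()
plt.show()
```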
In general, among the eight states in my analysis, the largest estimated charter “effects” (according to CREDO) are found in states with the second lowest total index (10). Charter performance appears somewhat low among states scoring at the two extremes of the index (9 and 12 points), though those groups include only one and two states, respectively. So, there’s really not much to discuss here as far as a relationship between charter performance and NACSA’s index of “essential practices,” at least in these eight states (though, again, that’s not to say there isn’t one).
But perhaps the more useful examination is to see whether specific practices, rather than overall indexes, are associated with higher or lower performance. Unfortunately, there was little or no variation among these eight states in most of the “essential practices” (i.e., all or virtually all of them answered “yes”). Actually, in the case of a few of NACSA’s “essential practices,” there isn’t much variation across all authorizers. This is partially due to the fact that the policies are coded dichotomously – yes or no – but it’s still important to note.
It’s not surprising that best practices are used by most authorizers – that’s how it should be – and it may very well be the case that these practices improve the quality of approved/renewed schools. But, put simply, authorization policies can’t explain variation in charter performance if they themselves don’t vary much. This might suggest the need for more nuanced data on these practices.****
In any case, there are really only three practices that vary enough among my eight states (there are at least a couple of states in both the “yes” and “no” categories) to make a separate examination even possible.
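(One quick way to identify such practices – again using the hypothetical frame and invented column names from above – is to screen each yes/no column for a minimum count in both categories:)

```python
# Keep only the policies with at least two states in each category.
practice_cols = ["five_year_term", "review_panel", "revocation_criteria"]

usable = [
    col for col in practice_cols
    if data[col].value_counts().reindex(["yes", "no"]).fillna(0).min() >= 2
]
print(usable)  # practices with enough spread to compare at all
```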
Let’s start with whether states require schools to renew their charters every five years (or longer, but only if they are subject to a performance review every five years). The logic here is that requiring operators to renew every five years ensures that they are continually held accountable for their results, but that they are also given sufficient time to establish themselves and collect enough data to assess their performance.
The graph below is the same as the one above, except this time, the horizontal axis shows whether or not states mandate five-year terms (at least as determined by NACSA).
The distribution is somewhat even – there’s a spread of estimated charter effects in both subjects, in states that do and do not have this term requirement. But the fact that there are three “yes” states with positive charter “effects” in the math sub-graph might be noteworthy. Overall, it’s something, perhaps worth looking into (I’d prefer to see the relationship between performance and a continuous measure of term length).
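(For readers following along in code, the eyeballing above can be supplemented with simple group averages – purely descriptive with only eight states, but trivial to compute on the hypothetical frame:)

```python
# Mean estimated effect by policy group; descriptive only, no inference.
means = data.groupby("five_year_term")[["math_effect", "read_effect"]].mean()
print(means.round(3))
```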
Next, here’s the graph for a second policy – whether or not states use an external review panel for assessing applications to open charter schools. The idea behind calling this an “essential practice” is that experts familiar with the issues surrounding the opening and management of schools would contribute to states’ ability to ensure that only the best applications and schools are accepted/renewed. That makes good sense.
The two states that don’t use an external review panel – New York (City) and Arkansas – are estimated by CREDO to have positive charter effects, while there’s a pretty even spread among the states that do maintain this requirement. Again, there’s really not much indication here that review panels are associated with the estimated impacts in these eight states, though the maldistribution (only two states in the “no” column) severely limits this comparison, and it seems difficult to argue that this policy is anything but a good idea.
Let’s finish with one of the most oft-discussed policies these days – whether or not authorizers maintain clear criteria for revoking a school’s charter (i.e., closing it down). In short, the logic is that states with clear criteria will be more likely to close down low-performing or other dysfunctional charters, thereby boosting aggregate performance (though, in the case of academic performance-related criteria, I would be as concerned about the validity as the clarity, to say nothing about the complications in determining whether closure is itself a good idea).
Again, charter performance is higher in the two states (one of them is again New York City, which is on top of Louisiana in the reading graph) that don’t have clear revocation criteria, whereas states that do maintain clarity in their revocation standards (at least according to NACSA) are pretty evenly distributed in both subjects. There’s nothing here to build on.
Overall, then, with the possible exception of term length (which is, it’s worth noting, the only one that varies much at all), there doesn’t seem to be much consistency in the relationship between charter performance and either the overall index scores or their constituent “essential practices” in these eight states. But that could easily be due to the inadequacy of my data and “analysis.” It will take a lot more than the results above to reach even tentative conclusions about these associations.
So, I’m hoping to see more of this type of examination (hopefully done better and more thoroughly than above). No doubt charter supporters and researchers will disagree on the importance of various authorizer policies (many of which may not be in NACSA’s index), and this would seem to be an important first step – identifying the policies that we should expect to be associated with charter performance, and then testing those hypotheses to the degree that data permit. Given my lack of knowledge in this area, I cannot provide an overview of which practices – in addition to or including those used by NACSA – are generally considered critical. I would encourage others to take a look at how their preferred policies compare with results, test-based and otherwise.
I think one of the more feasible (albeit still limited) approaches is looking at inter-authorizer variation in performance within states – that is, whether there is any systematic performance variation (e.g., school-level value-added), all else being equal, between schools authorized by different entities, and then checking if there are any shared practices among those authorizers that seem to be getting results (this has been done in studies looking at characteristics associated with performance of charter schools and CMOs – it’s a bigger stretch with authorizers, but it might yield useful results).*****
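(To make that idea a bit more concrete, here is a rough sketch of what such a within-state comparison might look like – school-level value-added regressed on authorizer indicators plus a few observable controls. Everything in it, from the variable names to the data themselves, is synthetic and purely illustrative; a real analysis would need actual value-added estimates and far more careful specification.)

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic school-level data for a single state: a value-added estimate,
# the authorizing entity, and a few observable controls. All invented.
rng = np.random.default_rng(0)
n = 120
schools = pd.DataFrame({
    "authorizer": rng.choice(["district_a", "district_b", "state_board"], size=n),
    "pct_frl":    rng.uniform(20, 95, size=n),   # % free/reduced-price lunch
    "enrollment": rng.integers(100, 800, size=n),
    "years_open": rng.integers(1, 15, size=n),
})
schools["value_added"] = rng.normal(0, 0.1, size=n)  # placeholder outcome

# Do schools differ systematically by authorizer, all else equal?
model = smf.ols(
    "value_added ~ C(authorizer) + pct_frl + enrollment + years_open",
    data=schools,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
print(model.summary())
```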
In any case, if charter supporters want to put forth the (clearly plausible) argument that authorization policies are one means by which the varied but generally average performance of charters overall can be explained (and improved), these are the types of associations that need to be examined, even if such analysis will require careful interpretation. In the meantime, as is often the case, arguments about authorization policies are to some degree just compelling speculation.
- Matt Di Carlo
*****
* There is also some evidence that certain types of operators seem to get worse results (see this paper finding lower performance among Ohio charters operated by non-profits, and this one reaching the same conclusion about local districts and virtual schooling organizations). This is related to but doesn’t quite tell us what we want to know – whether the “quality” of concrete authorization policies, which are used to open, manage and close schools, and not just the characteristics of entities, helps to explain the variation in their performance. That said, if schools approved by certain types of authorizers (or individual authorizers regardless of type) are shown to get systematically worse results, this might be policy-relevant information.
** Needless to say, the designation of these policies as “essential” is a judgment call, but I had to start somewhere, and NACSA is the national organization for authorizers. In addition, it’s possible that states have changed these policies since the years in which CREDO has testing data. I cannot find any NACSA reports from prior years that provide a state-by-state breakdown, so I must simply tolerate this potential source of error.
*** I assembled an exceedingly simple dataset that compared each state’s CREDO charter performance measure with the NACSA “essential practices” (each coded as “yes” or “no”). However, most states allow different entities (mostly districts) to authorize charter schools. For instance, in Florida, there are many different authorizers – districts, universities, etc. – all of which have different policies. I did retain two states (LA and PA) in which the overwhelming majority of charter schools were authorized by the same entity (e.g., Louisiana’s state board). Finally, I should note that it’s possible that the relationship between policies and performance in these eight states is different from that in states allowing multiple authorizers.
**** The lack of variation in these policies among my eight states is not quite representative. For example, there’s at least some inter-authorizer variation in California, which has almost 300 authorizers. The vast majority of authorizers score eight or higher on the index, but the configurations of their “essential practices” do differ.
***** I have argued on several occasions that variation in charter performance seems strongly associated with school-level characteristics such as extended time, discipline policies, tutoring, funding and other factors. There is some limited but growing empirical support for this viewpoint (see this policy brief), but it is not necessarily in conflict with the authorizer argument. In fact, one might argue that authorizers could exploit these associations.
This blog post has been shared by permission from the author.
The views expressed by the blogger are not necessarily those of NEPC.