
What Does the New York City Charter School Study from CREDO Really Tell Us?

With the usual fanfare, we were all blessed last week with yet another study seeking to inform us all that charteryness in-and-of-itself is preferable to traditional public schooling – especially in NYC! In yet another template-based pissing match (charter vs. district) design study, the Stanford Center for Research on Education Outcomes (CREDO) provided us with aggregate comparisons of the estimated academic growth of two groups of students – one that attended NYC charter schools and one that attended NYC district schools. The students were “matched” on the basis of a relatively crude set of available data.

As I’ve explained previously in discussing the CREDO New Jersey report, the CREDO authors essentially make do with the available data. It’s what they’ve got. They are trying to do the most reasonable quick-and-dirty comparison, and the data available aren’t always as precise as we might wish them to be. But this is also not to say that supposed Gold Standard “lottery-based” studies are all that. The point is that doing policy research in context is tricky, and it requires numerous important caveats about the extent to which stuff is, or even can be, truly randomized or truly matched.

The new CREDO charter study found that children attending charters outpaced their peers in district schools in math (significantly) and somewhat less so in reading (relatively small difference). Their analysis included six years of data through 2010-11 (meaning that the last growth period included would be 2009-10 to 2010-11).

How does a CREDO study work?

Each charter school student is matched with a virtual peer who attends a district school. The NYC CREDO study matches students on the following characteristics:

  • Grade-level
  • Gender
  • Race/Ethnicity
  • Free or Reduced Price Lunch Status
  • English Language Learner Status
  • Special Education Status
  • Prior test score on state achievement tests

The CREDO study does not match students by:

  • Whether they qualify for free versus reduced-price lunch, which may be consequential for the validity of the match if students in the district school sample are more likely to qualify for free lunch while students in the charter school sample are more likely to qualify for reduced-price lunch.
  • The type or severity of disability, which may be similarly consequential if it turns out that charter students are less likely to have severe disabilities.

Prior test scores should partially compensate for these shortcomings. But, as I discussed in a previous post, problems arise from assuming these matches to be adequate. Nonetheless, this is still a secondary issue.
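For readers who want to see the mechanics, here is a minimal sketch of what matching each charter student to a “virtual peer” on the characteristics above might look like. To be clear, this is my own illustration of the general approach, not CREDO’s actual procedure, and the column names (student_id, prior_score, etc.) are hypothetical:

# Illustrative sketch of "virtual peer" matching on the characteristics above.
# NOT CREDO's actual algorithm or data; all column names are hypothetical.
import pandas as pd

CATEGORICAL = ["grade", "gender", "race_ethnicity", "frl", "ell", "sped"]

def match_virtual_peers(charter: pd.DataFrame, district: pd.DataFrame,
                        score_col: str = "prior_score") -> pd.DataFrame:
    """For each charter student, find a district student with identical
    categorical traits and the closest prior test score."""
    matches = []
    district_groups = district.groupby(CATEGORICAL)
    for _, student in charter.iterrows():
        key = tuple(student[c] for c in CATEGORICAL)
        if key not in district_groups.groups:
            continue  # no district student shares this exact profile
        pool = district_groups.get_group(key)
        # pick the district student whose prior score is closest
        peer = pool.iloc[(pool[score_col] - student[score_col]).abs().argmin()]
        matches.append({"charter_id": student["student_id"],
                        "peer_id": peer["student_id"],
                        "score_gap": peer[score_col] - student[score_col]})
    return pd.DataFrame(matches)

Note that “frl” here is a single yes/no flag and “sped” ignores severity – which is precisely the crudeness discussed above.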

Perhaps the biggest issue here is that the CREDO method makes no attempt to separate the composition of the peer group from the features of the school. That is, it may be the case that some portion – even a large portion – of the effectiveness being attributed to charter schools is merely a function of putting together a group of less needy students.

CREDO Estimated Charter Effect = Peer Effect + School Effect

So who cares? Why is this important? As I’ve explained a number of times on this blog, from a policy perspective the “scalable” portion is the “school effect” – the stuff (educational programs, services, teacher characteristics, etc.) that leads to differences in student achievement even if all of the kids were the same (not just the observed/matched child). If the effect is largely driven by achieving a selective peer group, that may be equally valuable for the children who have access to this school, but one can only stretch the selective peer group model so far in the context of a high-poverty city. It’s not scalable. It’s a policy that necessarily requires advantaging a few (in terms of peer group) while disadvantaging others.
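A toy simulation makes the identification problem plain. In the made-up numbers below, half of the charter-district growth gap comes from peer composition and half from what the school itself does; a matched student-level comparison recovers only the combined gap and cannot say how much of it is scalable:

# Toy illustration (invented numbers) of why a matched comparison bundles
# peer effects and school effects together.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_school_effect = 0.05   # growth from programs, staffing, time, etc.
true_peer_effect = 0.05     # growth from a less needy peer group

# Matched students look identical on observed traits in both sectors...
district_growth = rng.normal(0.0, 0.2, n)
# ...but charter students also receive the peer and school components.
charter_growth = rng.normal(true_school_effect + true_peer_effect, 0.2, n)

estimated_charter_effect = charter_growth.mean() - district_growth.mean()
print(f"Estimated 'charter effect': {estimated_charter_effect:.3f}")  # ~0.10
print(f"True school effect alone:   {true_school_effect:.3f}")        # 0.05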

What about those peer groups?

Here’s a look at the “relative demographics” of New York City charter schools compared to schools serving the same grade ranges in the same borough. This figure is derived from data used in a previous report, and is being used in a forthcoming study, in which we go to great lengths to determine a) the comparability of students, b) the characteristics of teachers, programs and services and c) the comparable spending levels of New York City charter schools and district schools serving similar students of similar grade ranges. Our studies have employed data from 2008 to 2010, significantly overlapping the CREDO study years.

Figure 1. Relative Demographics of Selected Management Organizations 2008-10


Here, we see that, compared to schools serving the same grade levels in the same borough, NYC charters – for many of the groups shown – have 10% to 20% fewer children qualifying for free lunch (household income below 130% of the poverty level), even if they appear to have comparable shares qualifying for free or reduced-price lunch (income below 185% of the poverty level). These two groups – free lunch and reduced-price lunch – are substantively different in terms of their educational outcomes.

Further, charters serve a much lower share of children with limited English language proficiency, a finding validated by other authors. And charter schools generally serve much lower shares of children with disabilities (a finding we explored in greater detail here!).

So, while CREDO matched individual students by the crude characteristics above, they did not attempt to separate, in their analysis, whether actual school quality factors or these substantive peer group differences were the cause of differences in student achievement growth.

Now, we have no idea what share of the growth, if any, is explained by peer effect, but we do know from a relatively large body of research that selective peer effects work both to advantage those selected into the desirable peer group and disadvantage those selected out. That aside, it is conceivable that New York City charter schools are doing some things that may lead to differential achievement growth. In fact, given what we now know from our various studies of New York City charter schools (including peer sorting), I’d be quite shocked and perhaps even disappointed if NYC charters were not able to leverage their various advantages to achieve some gain for students!

In New York City, what are those strategies? [School Effects?]

Let’s start with class size variation. We used data from 2008 to 2010 to determine the average difference in class size between NYC charter schools and district schools serving similar grade ranges and similar student populations. Here’s what we found for 8th grade as an example.

Figure 2. Charter Class Size Difference from Similar District School

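As a rough illustration of the comparison behind Figure 2 (simplified here to borough and grade; our actual analysis also accounts for student population characteristics, and the column names are hypothetical):

# Sketch: average 8th grade class size in each charter school minus the mean
# for district schools in the same borough. Simplified; column names assumed.
import pandas as pd

def class_size_gap(charters: pd.DataFrame, districts: pd.DataFrame,
                   grade: int = 8) -> pd.Series:
    district_means = (districts[districts["grade"] == grade]
                      .groupby("borough")["avg_class_size"].mean())
    c = charters[charters["grade"] == grade].set_index("school_id")
    return c["avg_class_size"] - c["borough"].map(district_means)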

Now, on to teacher salaries. First, we used individual teacher-level data to estimate the salary curve by experience for teachers in NYC charter schools and in similar assignments in district schools. Here’s what we found. Charter teachers, who already have smaller class sizes on average, are getting paid substantively more in many cases (particularly in elite/recognized charter management chains).

Figure 3. Projected Teacher Salaries (based on regression model of individual teacher data)

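For those curious, salary curves like those in Figure 3 come from regressing individual teacher salaries on experience and sector. Below is a minimal sketch of that kind of model; the variable names and the exact specification are illustrative assumptions, not the actual code behind our report:

# Sketch of a salary-by-experience regression comparing charter and district
# teachers. Functional form and column names are assumptions for illustration.
import pandas as pd
import statsmodels.formula.api as smf

def fit_salary_curve(teachers: pd.DataFrame):
    """teachers: one row per teacher with columns salary, experience,
    charter (0/1), and grade_span."""
    return smf.ols(
        "salary ~ experience + I(experience**2) + charter"
        " + charter:experience + C(grade_span)",
        data=teachers,
    ).fit()

# Predicted salaries at each experience level, by sector, can then be plotted
# to produce a curve like the one shown in Figure 3.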

But, that pay does come with additional responsibilities, which for students translate into a) longer school years and b) greater individual attention. Here are the contract month differences for NYC charter and district school teachers.

Figure 4. Contract Months


Figure 5. Salary Controlling for Months


Others have noted that “no excuses” models often provide substantial additional time in terms of length of school year and length of school day (20% to 30% more time), but most have failed to provide reasonable cost analysis of this additional time. Here are a few pictures of the comparable spending levels of district and charter schools, for elementary and middle schools, by special education share (the strongest predictor of differences in site budgets per pupil in NYC).

Figure 6. Spending per Pupil and Special Education for Elementary Schools.


Figure 7. Spending per Pupil and Special Education for Middle Schools.


In our more comprehensive report on the topic and in a related forthcoming article, we have found that leading charter management organizations often spend from $3,000 to $5,000 more per pupil in NYC than do district schools serving similar populations.

To Summarize

Okay, so we know that:

CREDO Estimated Charter Effect = Peer Effect + School Effect

And we know that the peer groups into which the “matched” kids were sorted are substantively different from one another and that various school resources are substantively different (despite what some very poorly constructed, very selective analyses might suggest).  It’s certainly possible that BOTH MATTER – and that BOTH MATTER quite a lot. Or at least they should.

Figure 8. The Real Question behind the NYC CREDO Study?


Actually, it’s rather depressing that all that additional time, paid for with additional salaries and applied to smaller classes of more advantaged kids, couldn’t accomplish an even better gain on reading assessments. That would undermine a lot of what we currently understand about schooling and peer effects.

And the Policy Implications Are?

What’s most important here is how we interpret the policy implications. Certainly, given the wide variation in both district and charter schooling in NYC and the substantial differences between them, it would be foolish to assert that any differences found in the CREDO study provide an endorsement of charter expansion – that is, an endorsement of simply adding more schools called charter schools. The study is a study of charter schools that serve largely selective populations and have lots of additional resources for doing so. It by no means establishes that we could just add any old charter schools in any neighborhood and achieve similar results.

It also may be the case that, even if we try our hardest to replicate only the good charters, as charter market share increases in NYC, both the supply of more advantaged students and the access to big-money philanthropy start to run thin. Note that the NYC share of children in charter schools remained around or under 4% during the period studied – a sharp contrast with other states/cities where charter performance has been much less stellar.

An alternative assertion that might be drawn from combining the NYC charter study with our previous studies is that more students might benefit from being provided additional resources. But scaling up these charter alternatives would not be cheap. Here’s what we found in our comparisons of New York City and Houston:

These findings, coupled with evidence from other sources discussed earlier in this report, paint a compelling picture that “no excuses” charter school models like those used in KIPP, Achievement First and Uncommon Schools, including elements such as substantially increased time and small group tutoring, may come at a significant marginal cost. Extrapolating our findings, to apply KIPP middle school marginal expenses across all New York City middle school students would require an additional $688 million ($4,300 per pupil x 160,000 pupils). In Houston, where the middle school margin is closer to $2,000 per pupil and where there are 36,000 middle schoolers, the additional expense would be $72 million. It makes sense, for example, that if one expects to find comparable quality teachers and other school staff to a) take on additional responsibilities and b) work additional hours (more school weeks per year), then higher wages might be required. We provide some evidence that this is the case in Houston in Appendix D. Further, even if we were able to recruit an energetic group of inexperienced teachers to pilot these strategies in one or a handful of schools, with only small compensating differentials, scaling up the model, recruiting and retaining sufficient numbers of high quality teachers might require more substantial and sustained salary increases.
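The back-of-the-envelope arithmetic in that passage is easy to reproduce:

# Marginal cost extrapolation from the quoted passage.
nyc_total = 4_300 * 160_000       # KIPP-style margin x NYC middle schoolers
houston_total = 2_000 * 36_000    # Houston margin x Houston middle schoolers
print(f"NYC:     ${nyc_total:,}")      # $688,000,000
print(f"Houston: ${houston_total:,}")  # $72,000,000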

But, it’s also quite possible that $688 million in New York or $72 million in Houston might prove equally or even more effective at improving middle school outcomes if used in other ways (for example, to reduce class size). Thus far, we simply don’t know.

As I noted in a previous post, it’s time to get beyond these charter vs. district school pissing match studies and seek greater precision in our comparisons and deeper understanding of “what works” and what is and isn’t “scalable.”

This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.
Find the original post here:

The views expressed by the blogger are not necessarily those of NEPC.

Bruce D. Baker

Bruce Baker is Professor and Chair of the Department of Teaching and Learning at the University of Miami. Professor Baker is widely recognized as the nation’...