As I’ve said a few times, the only people I really consider enemies in the education debates are those who willfully distort or ignore data to reinforce previously held beliefs. Sadly, I’m beginning to realize that even with that narrow definition, my list of allies grows slimmer by the day. I often respect Mike Petrilli (VP of the Fordham Institute) because he is very clear about his vision for school reform and his potential biases. But a recent article he wrote for Education Next called “The Problem with Proficiency” may be one of the most blatant acts of intellectual dishonesty I’ve seen in a while, and I don’t quite get why it’s not a bigger deal.
Petrilli’s piece makes the very obvious and oft-stated point that proficiency data does not tell the whole story, since schools receive students at varying achievement levels. What matters is growth. To be fair, we do still talk too much about proficiency data, but Petrilli neglects to mention the difficulties that come with measuring growth, or how to account for student demographics without turning whatever assessment formula policymakers are left with into an opaque and seemingly arbitrary algorithm. Why is his discussion so undeveloped?
By the end of the piece, anyone would realize Petrilli’s article really isn’t about the problem with proficiency rates. Rather, it should be titled “The Problem with Proficiency Rates When They Make Schools I Support Look Bad.” See, Mike is a big believer in Democracy Prep, a fast-expanding charter network in Harlem. It’s not necessarily a bad school; I taught there at institute and was impressed by most of what little I saw. They’ve posted strong results on standardized tests and on external evaluations of growth.
However, Democracy Prep did not look so good when the results of the new Common Core assessments were released. These tests measure skills similar (but not identical) to those of previous testing regimes, often at a higher level of rigor. I’ve been looking at the data New York released, and Democracy Prep’s average scores fell by roughly 25% more than the average charter school’s did (see Tables 1 and 2).
Keep in mind, what we’re talking about here isn’t just proficiency rates, but changes in average scores. Not only that, but changes in average scores on a new assessment that is generally agreed to be more rigorous. Yet Petrilli doesn’t reflectively assess one of his favorite charter schools. Even in a loving way. Instead, he simply quotes the talking points of Democracy Prep’s founder. I’m not going to restate them, but they essentially say that (1) since Democracy Prep is a middle school, it’s unfair to compare its sixth-grade scores to elementary schools’ third-grade scores, because a middle school has only a few months to get its sixth graders up to grade level, while elementary schools have had their students for almost three years; and (2) growth matters most, their school still has very high growth rates, and this shows with their seventh and eighth graders, who have had more time at the school.
Petrilli then essentially ends the piece. No look at the data, no possible explanatory factors for why Democracy Prep’s scores dropped.
So, since the experts are neglecting to look critically at the data, amateurs like me are forced to take the reins. Here’s some data I organized based on the released 2012 scores (rescaled so they would be comparable to 2013’s scores) and this year’s Common Core scores. First you will see an overview:
So, Democracy Prep’s average scores were never stellar (which is okay, because we care more about growth than proficiency), but notice how their scores fall by 25 points while the average charter school’s fall by only 20. Any unbiased observer should now start wondering: what’s making their scores fall? Is it a problem with all the high-performing charter schools, or just Democracy Prep? Table 2 should help with that.
Here I’ve summarized the data by grade and subject (Democracy Prep had 16 grade/subject combinations tested) for the major charter schools and networks in New York City. Notice again that Democracy Prep’s data is nothing spectacular in 2012, but if you look at the change to 2013, Democracy Prep fell by 25 points while the average network fell by 17. The only major network that fell by more was National Heritage Academies (the for-profit chain that many could argue is a case in point against vouchers, but that’s for another day). But let’s consider Andrew’s argument: perhaps Democracy Prep is unfairly penalized because it has the most sixth graders, making it harder to catch those students up by the spring. (Keep in mind my dataset does not include grades created this past year, so the figures are not distorted by brand-new sixth-grade classes.) Let’s take a look.
Table 3 (Democracy Prep, scores/change by grade: number of classes tested, 2012 mean, change, 2013 mean).
Table 4 (scores/change by grade for all of the charter networks above).
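For anyone who wants to replicate this, the mechanics behind Tables 1–4 are simple: rescale the 2012 means onto the 2013 scale, subtract to get each grade/subject’s change, then average by network and break out by grade. Here’s a minimal sketch in pandas with made-up numbers; the real released figures are not reproduced here, and all column names and values are purely illustrative:

```python
import pandas as pd

# Illustrative data only: network, grade, 2012 mean (already rescaled
# to the 2013 scale), and 2013 mean. These are NOT the real NY scores.
scores = pd.DataFrame(
    [
        ("Democracy Prep", 6, 680, 648),
        ("Democracy Prep", 7, 676, 655),
        ("Democracy Prep", 8, 675, 659),
        ("Other Network", 6, 682, 663),
        ("Other Network", 7, 681, 665),
        ("Other Network", 8, 679, 663),
    ],
    columns=["network", "grade", "mean_2012", "mean_2013"],
)

# Year-over-year change for each grade
scores["change"] = scores["mean_2013"] - scores["mean_2012"]

# Table-2-style view: average change per network
by_network = scores.groupby("network")["change"].mean()

# Table-3/4-style view: change broken out by grade, to test the
# "new sixth graders drag the average down" argument
by_grade = scores.pivot(index="grade", columns="network", values="change")

print(by_network)
print(by_grade)
```

With these fake numbers, the sketch would show the same pattern I describe in the tables: a bigger average drop for one network overall, but a drop that shrinks as you move up the grades.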
So if you look at the data, you will realize there is some merit to Andrew’s argument. The drops shrink markedly from grade 6 to grade 8, and Democracy Prep’s average scores rise sharply across those grades, especially compared to the slowly rising averages for grades 6-8 generally. It’s also possible that Democracy Prep takes a higher proportion of its sixth graders from non-feeder schools, since it is primarily a middle school (while a KIPP middle school can take students from a KIPP elementary), which could depress its scores. One could even argue that the gap between sixth-grade and eighth-grade scores suggests there is a lot of growth happening at Democracy Prep.
So in the end, I’m not sold on Democracy Prep. The data suggests that they were overrated based on last year’s proficiency data, relative both to all schools and to other charter networks. It also suggests it is very likely that Democracy Prep is teaching basic skills at the expense of higher-order thinking (see my previous post about interpreting Common Core data). There are, of course, other explanations for the drop, and I invite people from Democracy Prep and researchers who believe in them to defend themselves. But man, acting as their PR spokesman, or blaming the tests themselves for these failures, only adds fuel to the fire of those who trust nothing that reformers, or those sympathetic to their cause, say.
Final thought: I have seen this article cross posted on at least three different websites now. Surely, we can all agree, it is not nearly good enough for such syndication.