Yo Teach…! Or how to avoid teaching like Jason

Aug 12 2013

The Difference Between Charter and Public: A Nuanced Look at NYC Common Core Data

In an initial take on the data from the new common core assessments in New York, Gary Rubinstein does something very interesting: he compares 2012 results with 2013 results for charter and public schools. Why is this so interesting? The common core assessments supposedly test higher-order thinking skills. Therefore, schools that won accolades for high test results in the past by focusing practically all instruction on discrete, low-level test-taking skills, at the expense of higher-order problem-solving and critical thinking, may suddenly get exposed. On the other hand, schools focusing on deeper learning that may have been undervalued by previous low-level standardized tests may suddenly flourish. This is, I believe, a one-time phenomenon (for each state in which the new tests are implemented). Over time, those test-taking schools will potentially learn how to mobilize around a new set of tests, while still managing to crowd out deeper instruction. So it’s a rare chance to rank schools not on what they’ve traditionally been tested on, but on what they may be leaving out to make room for test preparation.

In general, I think Gary Rubinstein chose to be forceful and quick with his remarks, perhaps at the expense of being nuanced and thorough. He admittedly focused on just one grade, and limited his comparisons to the famous charter school chains (and, let’s be honest, chains he may have enjoyed embarrassing), not necessarily all charter schools (and I understand…it takes a while to label each charter school in your data!). But it means we can’t really generalize his findings.

I’m hoping to discuss the data in a way that is a bit more nuanced and generalizable. I used data for all grade levels/subjects, and ensured that all schools were correctly classified as charter or traditional public. I went into the process having no idea what results I would find, and hope any readers find my analysis to be, if nothing else, intellectually honest.

Quick note on methodology

Thanks to Gary and a few dozen websites, I was able to find two Excel spreadsheets. One broke down 2012 test scores by building, grade level, and subject for the state of New York. The other did the same for the 2013 common-core-aligned tests. I then went through the arduous process of removing non-New York City data (I also removed Staten Island because I was a bit confused by its classification in the document), and then removing grade levels/subjects that did not have test scores for both 2012 and 2013 (new grades, new schools, etc.). That left 8,248 grade levels/subjects (for example, KIPP Excel Academy 7, grade 3, Math would be one row; ELA and grades 4, 5, 6, 7, and 8 would all be separate rows–there is no high school data), and I combined the two spreadsheets so the 2012 and 2013 scores would appear next to each other. However, 2012 scores were out of about 775, with a proficiency cut-off of about 669 (it varies by grade level). The 2013 scores run on a similar but much lower scale, with a proficiency cut-off of about 318 (again, give or take about 5). Because I didn’t want it to look like every school dropped by 300 points, I converted the 2012 scores into 2013 units by dividing them by 2.1 (roughly 669/318), so the two years would be roughly comparable. Of course, this also leads to some grades being a bit favored over others, but since I was comparing charter vs. public schools, rather than particular grade levels, I let it slide.
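For anyone who prefers scripting to Excel, here is a minimal pandas sketch of that merge-and-rescale step. The file names, the county labels used to filter down to NYC, and all column names (COUNTY, BEDS_CODE, GRADE, SUBJECT, MEAN_SCORE) are my placeholders, not the actual headers in the state spreadsheets:

```python
import pandas as pd

# Load the two state spreadsheets (file names are placeholders).
scores_2012 = pd.read_excel("ny_scores_2012.xlsx")
scores_2013 = pd.read_excel("ny_scores_2013.xlsx")

# Keep only NYC rows; Staten Island (Richmond County) is dropped, as in the text.
nyc_counties = ["NEW YORK", "KINGS", "QUEENS", "BRONX"]
scores_2012 = scores_2012[scores_2012["COUNTY"].isin(nyc_counties)]
scores_2013 = scores_2013[scores_2013["COUNTY"].isin(nyc_counties)]

# One row per building/grade/subject; an inner merge keeps only the
# grade levels/subjects tested in both years.
keys = ["BEDS_CODE", "GRADE", "SUBJECT"]
merged = scores_2012.merge(scores_2013, on=keys,
                           suffixes=("_2012", "_2013"), how="inner")

# Convert 2012 scores into rough 2013 units (cut-offs ~669 vs ~318, so /2.1).
merged["SCORE_2012_SCALED"] = merged["MEAN_SCORE_2012"] / 2.1
merged["CHANGE"] = merged["MEAN_SCORE_2013"] - merged["SCORE_2012_SCALED"]

# (A TYPE column marking each row "Charter" or "Traditional" was added by
# hand-labeling schools, per the text; that step is not shown here.)
```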

 

I also must admit that I am not a pro at Excel. I did the best I could, but for anyone who wants to improve upon my methodology, fact-check, or take the data and do more interesting things with it, feel free to email me (maxyurkofsky@gmail.com) and I’ll send you my Excel spreadsheet.

 

Before going into the specific research questions I will answer, I wanted to take a moment and discuss the assumptions underlying a project like this.

 

My potentially incorrect assumptions (New Yorkers, please provide feedback if I am wrong)

1. Generally speaking, the common core assessments (imperfectly) focus less on measuring content-oriented skills (low blooms) and more on measuring higher level problem solving/critical thinking (higher blooms).

2. New York schools have not yet had the time to completely realign their curriculum and pedagogy to address these higher order thinking skills in response to the tests. I assume there were some relatively superficial shifts to test preparation for these assessments, but nothing altering the fundamental organization of the school or classroom. For the reasons described in the introduction, this is important.

 

Research questions and why they’re important

1. Are charter schools more likely than public schools to see their scores drop with the shift to common core?

What It Might Tell Us…

Put simply, if my assumptions hold, schools that experience a dramatic drop in scores betray a previous alignment around low-order, test-prep-oriented instruction. Conversely, substantial growth in scores could be explained by arguing that such schools had previously aligned around higher-level instruction at the expense of test-prep skills, and that such instruction is now paying off. So if charter schools are generally more aligned around test prep and low-level instruction, as many claim, this is the best chance at seeing evidence of it, since as time goes on they will figure out how to align around these new standards. Just as an example, I previously taught at the quintessential deprofessionalizing test-prep charter school in Detroit. Our instruction was superficial, our students were only taught how to write standardized test responses rather than meaningful pieces, and their reading/math skills were almost entirely limited to testing situations. I cannot wait to see how this school responds to different, higher-level common core tests, where such superficial instruction will hopefully not pay off.

 

2. Is the effect of the transition to common core assessments more uniform for charter schools than for traditional public schools?

What It Might Tell Us…

This question is a little more difficult to wrap my head around. In general, I’m wondering how much school-to-school variation there is in the change in test scores, not necessarily their values. We can look at this in two ways. First, what is the standard deviation of the change from 2012 to 2013? If a group of schools has a very low SD, then those schools are probably very similar in how their instruction harmonizes or conflicts with the new tests’ higher-level demands. A high standard deviation shows that, in general, the schools are diverse in their instructional approaches. Second, we can fit a simple line of best fit for charter and traditional schools (separately) to see if, in general, previously “good” schools are more likely to have gains or losses when switching to common core; a sketch of both calculations follows. This tells us whether we can make any generalizations about how/whether schools we once viewed as “good” or “bad” are more likely to teach higher-order thinking skills.
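Here is a minimal sketch of both calculations, reusing the merged frame from the earlier snippet; the TYPE column distinguishing charter from traditional schools is again my assumption:

```python
import numpy as np

# For each school type: spread of the score changes, plus a line of best
# fit asking whether a grade level's 2012 score predicts its change.
for school_type, group in merged.groupby("TYPE"):
    sd = group["CHANGE"].std()
    slope, intercept = np.polyfit(group["SCORE_2012_SCALED"],
                                  group["CHANGE"], deg=1)
    print(f"{school_type}: SD of change = {sd:.2f}, "
          f"best-fit slope = {slope:.3f}")
```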

 

3. How did the most well-respected/fastest-expanding charter schools weather the shift compared to the rest of schools?

In my next post, I will calculate these scores again for the largest and fastest-growing charter networks, to see if we can give praise where it is due or expose some charter school test-prep factories.

 

Data/Discussion

1. Are charter schools more likely than public schools to see their scores drop with the shift to common core?

              2012 mean    2013 mean    Change
Charter         322.93       302.81     -20.12
Traditional     319.67       294.21     -25.46

(2012 means shown in converted 2013 units.)

 

In general, the answer seems to be “no.” Traditional scores actually tended to drop more on average, so the New York City gap between charter and public schools is growing as a result of the common core tests. This does not mean charter schools are better, nor does it mean that they necessarily already have higher levels of instruction. It simply means that a very simple reading of the data does not support the conclusion that charter schools have generally been “exposed” as teaching at a very low level. I think Gary was misled by focusing only on the charter schools he knew about, which, as most already know, got a lesson in humility from the common core. But we can’t generalize to all charter schools.
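For anyone replicating this outside Excel, the table above amounts to a grouped mean over the merged frame from the earlier sketches (column names are, again, my assumptions):

```python
# Mean converted-2012 score, mean 2013 score, and change, by school type.
summary = merged.groupby("TYPE")[["SCORE_2012_SCALED", "MEAN_SCORE_2013"]].mean()
summary["CHANGE"] = summary["MEAN_SCORE_2013"] - summary["SCORE_2012_SCALED"]
print(summary.round(2))
```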

 

2. Is the effect of the transition to common core assessments more uniform for charter schools than for traditional public schools?

 

                       Standard Deviation of Change
Charter Schools                   10.40
Traditional Schools               13.87

 

So, as many might expect, there is a bit more variation among traditional public schools. However, it’s hard to picture what the difference in standard deviation looks like, so below are scatter plots for traditional vs. charter schools, where the 2012 scores are compared with the change in scores after moving to the common core assessments.

*UPDATE* The Y axis for traditional schools should read change, not “2013 Scores.” My bad.
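For anyone re-running this in code rather than Excel, here is a minimal matplotlib sketch of the same pair of scatter plots, again assuming the merged frame and column names from the earlier snippets:

```python
import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
for ax, school_type in zip(axes, ["Charter", "Traditional"]):
    group = merged[merged["TYPE"] == school_type]
    ax.scatter(group["SCORE_2012_SCALED"], group["CHANGE"], s=8, alpha=0.5)
    ax.axhline(0, color="gray", linewidth=0.8)  # points above this improved
    ax.set_title(f"{school_type} schools")
    ax.set_xlabel("2012 score (converted to 2013 units)")
axes[0].set_ylabel("Change, 2012 to 2013")
plt.tight_layout()
plt.show()
```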

In terms of simple spread, it’s worth noticing the range of changes for each type of school. Charter schools tend to be clumped between a change of -50 and 5. Traditional public schools have a larger clump of schools ranging from -65 to 20. So public schools reacted to the common core shift in much more varied ways than charter schools. But I think anyone looking at the two scatter plots will notice something far more interesting.

This gets me to that second question:

Do traditional or charter schools show more of a trend in how high- and low-scoring schools respond to the shift to the common core?

Finally, an interesting answer: yes! For charter schools, there is essentially no relationship between 2012 scores and how much their scores changed when shifting to common core. While the few highest-scoring schools of 2012 dropped by about the average amount (20 points), the high-middle to lowest-performing charter schools were all equally likely to change above or below the -20 mean. So we can infer that relatively higher-order instruction and test-prep-oriented instruction are equally distributed among charter schools of almost all perceived levels of quality. Notice one more thing about charters: almost none of them actually improved in their transition to the common core. One could interpret this as meaning almost no charter schools served as undervalued loci of higher-order instruction that are only now being allowed to flourish on new tests. Indeed, of the 307 grade levels/subjects that saw improvement from 2012 to the common core assessments, only 12 were from charter schools (3.9%). Keep in mind, though, that charter schools make up 8% of the grade levels tested.
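That last comparison is easy to reproduce from the merged frame (same assumed columns as before):

```python
# Grade levels/subjects that improved under the common core tests,
# and charters' share of them vs. their share of all tested rows.
improved = merged[merged["CHANGE"] > 0]
print(len(improved), "improving rows")
print(f"charter share of improvers: {(improved['TYPE'] == 'Charter').mean():.1%}")
print(f"charter share of all rows:  {(merged['TYPE'] == 'Charter').mean():.1%}")
```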

 

Still, compare this to traditional schools. Indeed, a handful of grade levels in the bottom 50% (based on 2012 scores) showed improvement on these more rigorous tests, and a significant proportion of the grade levels in the middle of the pack improved. This, I believe, is evidence that there may be many more traditional public schools teaching higher-order skills that may not be recognized on traditional standardized tests. Remember, this is not really the case for charter schools. On the other hand, there were a lot of traditional public school grade levels that fell dramatically from 2012 to 2013 (hence the high standard deviation). These grade levels were primarily from the bottom 50% based on 2012 scores. On the one hand, this tells us that there seem to be far fewer well-regarded public school test-prep factories whose test results may be overstating the quality of instruction. On the other hand, the huge dip for a large proportion indicates that there may be some deep problems in some public schools around low-order teaching. Please keep in mind that some of the top-performing public schools are selective. This undoubtedly has a skewing effect on the top-performing grade levels/subjects. If someone has a list of the selective K-8 schools, that would be helpful in moving forward with these results.

 

Summary

1. Charter school grades fell less on average than public school grades, but traditional public schools have many more examples of both improvement and steep decline.

2. 2012 charter school performance is a poor indicator of how schools respond to common core tests. Many high-performing grades dropped a lot; many low-performing grades dropped less. Higher-order instruction is therefore not likely to correlate with high 2012 test scores for charter schools. Don’t go to a charter with a past record of achievement and necessarily expect to find the best instruction of your life.

3. Many high- and middle-performing traditional public school grade levels improved a lot from 2012 to 2013, suggesting lots of higher-order instruction that is (for some) only now being evidenced on higher-rigor common core tests. We need to find ways of seeking out, supporting, and allowing these schools to be role models for other public schools.

4. Many low- and middle-performing traditional public school grade levels decreased a lot from 2012 to 2013, suggesting very low levels of instructional rigor. How can we help those schools without hurting the equally large group of traditional public schools that seem to be on the right track?

 

More to come.

-Max

2 Responses

  1. Andrew

Max – Thanks for this interesting analysis. Lots to chew on. Moving forward, I think we need more analysis of the differences between the 2012 and 2013 tests. It is far too simplistic to describe one as “measuring content-oriented skills (low blooms)” and the other as “measuring higher level problem solving/critical thinking (higher blooms)”. To begin with, there is an explicit emphasis in the ELA common core on reading and writing non-fiction. Secondly, there is an effort to establish a higher standard in terms of the complexity of texts students read (both fiction and non-fiction). If, in fact, the 2013 vs. 2012 exams reflected the pivot towards non-fiction and greater text complexity (I’m not yet familiar enough with the two exams to make this determination), then there can be many explanations for the various patterns that you note. Perhaps schools that experienced the greatest gains (or smallest drops) – both charter and traditional public – were more focused on having kids read and analyze authentic non-fiction, or simply having them read the actual text selections that appeared on the 2013 test. It’s not productive to assume a simple dichotomy between “lower order content-oriented skills” and “higher order problem solving/critical thinking”. Problem-solving can be elementary depending on the problem, and content can be intellectually challenging depending on the content. Then, of course, the confluence of content and critical problem solving on any given task can present a wide spectrum of cognitive and dispositional demands. How all this washes out in the two exams, and how instruction at the various schools links to performance on the tests, are very, very complicated questions.

With all that said – and here I should note that I am a former TFA’r and current professor of English education, consulting for a number of KIPP schools on reading, writing, and curriculum design – I heartily agree with you that Rubinstein overreaches in his conclusion that the new tests have “exposed” the high-profile charter schools. It’s wrong to fault these schools for preparing their students well for tests to which they are held accountable, even weak tests. I also agree with your prediction that these “test-taking schools will potentially learn how to mobilize around a new set of tests” (although I wouldn’t apply the tag “test taking” to the KIPP schools that I have consulted, and don’t think it’s inevitable that they will “crowd out deeper instruction”). I wouldn’t bet against KIPP NYC and NJ schools quickly figuring out the common core based tests in a year or two, and if the tests truly measure skills and knowledge that we value and predict future academic success, then that could only be a good thing. Again though, first we need to determine exactly what the new tests measure and predict.

    • yoteach

Andrew, thanks for the incredibly thoughtful critique and discussion. You’re right: I am certainly oversimplifying the differences between the tests, and doing so based more on common perceptions of the two tests than on a rigorous analysis. I think my rush to question the quick conclusions many drew from the data made me sacrifice fully knowing what I was talking about in relation to the tests themselves. I’ll certainly do that homework before I talk about them again. Do you recommend any high-quality discussions of the differences (I guess for either this NYC assessment or PARCC and Smarter Balanced)?

So I believe what you are saying is that there are a number of ways the common core assessments diverge from the 2012 assessments in terms of content, not just level of rigor. As a result, the common core assessment may not be the transparent looking-glass into rigor of instruction (as opposed to content of instruction) that I claim. A school could therefore teach literacy in a deep and complex way but might see a drop in test results if it previously focused on fiction rather than nonfiction. That means there are a lot of potentially confounding variables that prevent us from drawing such strong conclusions on a school/network level. Thanks for clarifying/amending that first assumption I made.

You are also absolutely right to distinguish between the kinds of low-level test-prep factories I allude to and the higher-level mobilization around academic success/standards that I probably do not do an explicit enough job of endorsing. Though I guess here is where you and I may diverge in terms of vision (not just my relative ignorance):

You argue that “It’s wrong to fault these schools for preparing their students well for tests to which they are held accountable, even weak tests.” I don’t totally disagree, but I think that this can often be done in ways that crowd out deeper instruction focused on developing those critically important but untestable traits/skills. You’re right that many top-performing schools can succeed on these exams without crowding out this instruction, and I believe you that KIPP probably fits in this category of schools. Indeed, they have already pioneered huge transformations in terms of character education that are rippling through schools across the country. That was in response to data that they themselves gathered and held themselves accountable to. My difference with you is that I am incredibly worried about this happening in the aggregate, even if it doesn’t happen in our best schools. KIPP, Uncommon, and other high-achieving networks are incredible organizations that are able, through leadership, thoughtful planning, and human capital, to both exceed academic standards and achieve other non-tested educational outcomes. But the schools that often attempt to replicate these models do so without the organizational ability to do both things at once. Instead, they focus on what’s measurable at the expense of what is not. And in these cases, I absolutely do fault these schools, especially when test results affect their status but not student opportunities (aka non-HS graduation/AP exams). As someone who suffered through a school like this (we could call it a “no excuses bastardization”), it is incredibly depressing witnessing such organizations heralded as solutions, their leaders revolutionaries to be handed more resources and power. Even though it’s imperfect, I believe these moments are important to *begin* (with many important steps to follow) to investigate the schools that knowingly teach content at the expense of more meaningful instruction. Because soon these kinds of schools will find a way of mobilizing around these new tests in a superficially successful but hollow way. Do you have suggestions for a better way to achieve this goal?

I realize I sound a bit like I’m describing many public or charter schools as fundamentally immoral. I don’t mean to, though there are certainly some examples. I rather find them to be organizationally irrational, and as a result forced to mobilize around the most measurable and prescribable indicators.
