Yo Teach…! Or how to avoid teaching like Jason

Jul 27 2013

The bigger trouble with economists: a response to Rick Hess

If anyone were to ask me to list the five current thinkers in education I admire most, find the most interesting, or whose books I would most like to be stuck with on a desert island, I would certainly have Rick Hess in one of my top three slots. Hess is a philosophically conservative thinker (he generally does not trust technocratic planners to micromanage classrooms from afar), but his critical eye often falls on those who believe choice and educational markets are a panacea as well. His disdain for the three unfortunate pillars of most educational discourse (dogmatism, ad hominem attacks, and silver bullets) makes me enjoy his work even though I frequently disagree with him.

 

But because of our philosophical differences, I am often bemused by the things that drive him crazy. For example, I expected Hess’s blog post entitled “The Trouble with Economists” to be a critique of the way quantitative methods have a revered status in social science, leading many to make broad generalizations from easily quantifiable but oversimplified data using dubious statistical techniques or isolated “experiments” that can’t necessarily scale. He certainly makes this point, but what sets him off is the economist James Heckman defending the significantly positive results of a thirty-year study of the Perry Preschool Program, claiming that “It is as good a trial for effectiveness as those we currently rely on to evaluate prescription and over-the-counter drugs.” Hess aptly points out the silliness of such a claim in the social sciences, since pharmaceutical trials do not suffer from the Hawthorne effect (changes in participant behavior because they are part of an experiment) and other realities that make social scientific research inherently less scientific and generalizable. But such a slip-up does not infuriate me. Indeed, randomized controlled trials are the best form of social scientific research, and their findings should not be dismissed without real engagement simply because scaling may be a problem.

 

Indeed, as Charles Payne writes in the introduction to his wonderful book “So Much Reform, So Little Change,” it is important to look at the most robust, outlier cases, not to show us what will happen if a particular program is expanded, but to show us what is possible. It’s important to know that, when done right, preschool does have an important long-term impact on student outcomes. It gives educators a trajectory, an end that we know is worth fighting for. Hess himself has endorsed this view regarding the achievements and potential of some charter school networks that have the autonomy to transform school organization, even though their philosophies have not yet had a significant impact when scaled. In fact, many of his empirical and theoretical discussions of such schools have dramatically changed my understanding of what is possible for school improvement. So while his wording was silly, it is certainly understandable that Heckman wants the public to realize the import of his study as pre-K reform gains political traction.

 

A much more troubling application of the economics discipline to educational issues, I believe, is embodied by the most prolific educational economist, Eric Hanushek (ranked #4 on Hess’s list of most influential educational scholars). In “Creating a New Teaching Profession,” a book of essays featuring Hess and a powerful essay by the economist Alan Blinder, Hanushek argues for “teacher deselection.” Deselection, you will learn, is Orwellian doublespeak for “mass firing.” He presents quantitative causal arguments that good education leads to economic growth and that good teachers lead to good education, and then concludes, intuitively (to some economists, I guess), that firing the worst 10% of teachers will increase students’ average test scores and, thus, boost the national economy.

 

That’s his argument. You may find yourself asking: What tests will we use to assess teacher effectiveness? Will we measure growth rates or proficiency rates? Will principals have a say? How will we control for principal effectiveness, demographics, or serving students with special social or academic needs? Who will replace these teachers? What will we do in schools that have large majorities of these teachers? How will losing 10% of their peers affect the remaining teachers’ quality? In addressing these complexities, Hanushek merely offers performance incentives for those who remain, to boost morale enough to overcome the fear of losing one’s job. Despite the hundreds of implementation questions that anyone who has ever stepped inside a school knows would dramatically affect the outcome of such a “deselection,” Hanushek believes he can use statistical techniques to claim that had we “deselected” teachers in the 1990s, we could expect not only a 0.28–0.42 standard deviation increase in student test scores, but also a 300 billion dollar increase in 2008’s GDP.

 

This is the trouble with economists. They tease out significant statistical relationships (let’s say between teacher quality and academic performance) from past data using a variety of methods that are rarely fully visible to the public. They do not discuss how the data are gathered for either variable, or those measures’ inherent limitations. They then use the coefficients of their statistical model to make the counterfactual (and obviously untestable) claim that if we had changed the quality of teaching, academic outcomes and GDP would have increased, all the while providing neither a discussion of how we would change the quality of teaching besides simply getting rid of the lowest data points, nor an analysis of how dramatically trimming the teaching population could have unintended side effects (like bigger class sizes, a rush of inexperienced new teachers filling the gaps, or cheating on exams, or on whatever measure teachers are now evaluated by).
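To make that critique concrete, here is a minimal, purely illustrative sketch in Python, with invented numbers and variable names rather than Hanushek’s actual model or data, of the coefficient-to-counterfactual move described above: fit a line through historical data, assume the slope would survive a massive policy intervention, and multiply it through to a headline dollar figure.

# Illustrative only: a toy version of the coefficient-to-counterfactual move.
# All numbers and conversion factors below are invented for the sketch.
import numpy as np

rng = np.random.default_rng(0)

# Fake historical data: a standardized "teacher quality" measure and test scores.
teacher_quality = rng.normal(0.0, 1.0, size=1000)
test_scores = 0.15 * teacher_quality + rng.normal(0.0, 1.0, size=1000)

# Step 1: estimate the association between quality and scores (an OLS slope).
slope, intercept = np.polyfit(teacher_quality, test_scores, deg=1)

# Step 2: the counterfactual leap: assume removing the bottom 10% of teachers
# shifts average measured quality, and that the historical slope still applies.
cutoff = np.quantile(teacher_quality, 0.10)
kept = teacher_quality[teacher_quality > cutoff]
quality_shift = kept.mean() - teacher_quality.mean()
projected_score_gain = slope * quality_shift  # in test-score standard deviations

# Step 3: chain a second assumed coefficient (scores -> GDP) to get a dollar figure.
ASSUMED_GDP_DOLLARS_PER_SD = 1.0e12  # invented conversion factor
projected_gdp_gain = projected_score_gain * ASSUMED_GDP_DOLLARS_PER_SD

print(f"projected score gain: {projected_score_gain:.3f} SD")
print(f"projected GDP gain: ${projected_gdp_gain / 1e9:.0f} billion")

Every number downstream of Step 2 depends entirely on assumptions: that the measured relationship is causal, that it would hold steady under a mass-firing intervention, and that none of the side effects listed above would feed back into the outcome.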

 

Hanushek’s approach exemplifies a blind trust in numbers and a willful denial of the complexities of dealing with actual humans in actual organizations. Nevertheless, Hanushek has been cited 32,000 times in academic papers and books alone, not to mention the influence he has had on educational politics. Not all of his articles rest on such a tenuous foundation, but never have I read a piece of his that acknowledges, let alone grapples with, the complexities of the organizations for which he prescribes uniform, blunt tools. Even though the conclusions Hanushek draws often support a conservative agenda, I wish Hess would use his conservative philosophy to call out the hubris of trying to micromanage millions of lives in tens of thousands of complex organizations based on an inherently limited statistical study.

- max

One Response

  1. B

    Yup.
