
19 November 2010

Seven Deadly Sins, Revisited

Gelman responds to Phil Schrodt’s take-down of statistical methodology, about which we commented a while back. To my mind, he has a strange take on the piece. To wit:

Not to burst anyone’s bubble here, but if you really think that multiple regression involves assumptions that are too much for the average political scientist, what do you think is going to happen with topological clustering algorithms, neural networks, and the rest??

Gelman is responding to two of Schrodt's seven sins: (1) kitchen-sink regressions with baseless control sets and (2) dependence on linear models to the exclusion of other statistical models. I think Gelman misinterprets Schrodt's criticism here. It is not that political scientists lack the ability to comprehend multiple regression and its assumptions; it is that they are being intellectually lazy (possibly incentivized by the discipline!) and fail to examine their analyses or their methods critically. It's a failure of standards and work, not a failure of intellect. Thus, I fail to see the contradiction in Schrodt's advice or his condemnation: it's a call to think more about our data and how they fit with our models and their assumptions. Now, one may think that this is beyond the abilities of most researchers, but I do not see Schrodt making that argument (and I am certainly not making it).
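
The kitchen-sink sin is easy to illustrate with simulated data. Here is a minimal sketch (entirely hypothetical data, not from either piece) of why a baseless control can be worse than no control: a variable caused by both the treatment and the outcome is thrown into the regression, and the coefficient of interest is badly biased.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical setup: x causes y with a true effect of 2.0; z is a
# "post-treatment" variable caused by both x and y, so conditioning
# on it biases the estimate of x's effect.
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)
z = x + y + rng.normal(size=n)  # a baseless "control"

# Parsimonious regression: y on x (intercept plus slope)
X1 = np.column_stack([np.ones(n), x])
b1 = np.linalg.lstsq(X1, y, rcond=None)[0]

# Kitchen-sink regression: y on x and z
X2 = np.column_stack([np.ones(n), x, z])
b2 = np.linalg.lstsq(X2, y, rcond=None)[0]

print(f"coef on x without z: {b1[1]:.2f}")  # near the true effect of 2
print(f"coef on x with z:    {b2[1]:.2f}")  # pulled far from 2 by the bad control
```

The point is not that controls are bad, but that choosing them requires exactly the kind of critical thought about the data-generating process that Schrodt says is being skipped.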

Gelman himself often calls for simplicity:

I find myself telling people to go simple, simple, simple. When someone gives me their regression coefficient I ask for the average, when someone gives me the average I ask for a scatterplot, when someone gives me a scatterplot I ask them to carefully describe one data point, please.

This seems more about the presentation of results, or a failure to know the data. More complicated models do pose a real challenge here, since they demand more care and attention in how we present results. All of the techniques Gelman describes should be essential parts of the data-analysis endeavor. That people fail to do these simple tasks speaks more to Schrodt's accusation of "intellectual sloth" than anything else.
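
Gelman's ladder of simplicity can be sketched in a few lines. This is an illustration on made-up data (the variable names and numbers are hypothetical, not from either post): from the regression coefficient, step down to the average, then toward the raw data, ending with a careful look at a single observation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: incumbent vote share vs. economic growth
growth = rng.normal(2.0, 1.5, size=30)
vote = 50 + 3 * growth + rng.normal(0, 4, size=30)

# Step 1: the regression coefficient (what usually gets reported)
slope, intercept = np.polyfit(growth, vote, deg=1)

# Step 2: fall back to the average
mean_vote = vote.mean()

# Step 3: a scatterplot would come next, e.g. plt.scatter(growth, vote)

# Step 4: carefully describe one data point
i = 0
print(f"slope = {slope:.2f}, mean vote share = {mean_vote:.1f}")
print(f"one observation: growth = {growth[i]:.2f}%, vote share = {vote[i]:.1f}%")
```

Each rung strips away a layer of modeling and forces you closer to the data themselves, which is exactly the discipline the quote is asking for.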

Finally, we can probably all get behind a couple of commandments:

  1. Know thy data.
  2. Use this knowledge to find the best method for thy question.

I see both Gelman and Schrodt making these points, albeit differently. While Schrodt sees a violation of (2) as primarily due to intellectual laziness, Gelman sees it as primarily due to intellectual handicaps. Both are slightly unfair to academics, but sloth is at least curable.

Posted by Matt Blackwell at November 19, 2010 6:55 PM