
11 November 2009

Answering "why" questions

Brandon Stewart pointed me to an interesting blog post by Andrew Gelman that touches on the problem of explaining the "causes of effects." The basic point is that "why" questions are difficult to answer within a potential outcomes framework, yet they are often exactly what we care about. Some in political science have gone so far as to argue that researchers using "qualitative" methods are more inclined (and better able) to tackle these "why" questions than their "quantitative" colleagues, who mostly focus on the "effects of causes."

This has been on my mind lately -- as part of a class in the statistics department, I've had several conversations with Don Rubin about how retrospective "case-control" studies might fit into the potential outcomes framework. The medical researchers who conduct these studies usually start from a "why" question: why did an outbreak of rare disease X occur? Which genes might cause breast cancer? Case-control studies and their variants are great for searching over many possible causes and pulling out those with strong associations with the outcome, but they aren't so great for estimating treatment effects. Rubin suggests that the proper way to proceed is probably to first use a case-control study to screen many possible causes, and then estimate treatment effects for the most likely ones using a different sampling design (matched sampling when the research must be observational, experimentation when it's possible). This already seems to happen to some extent in biostatistics and epidemiology, and it happens informally in political science as well.
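To make concrete what a case-control study surfaces, here is a minimal sketch of the standard association measure for such designs, the odds ratio with a Woolf-type 95% interval. The counts below are hypothetical, not from any study mentioned above, and the point is exactly the caveat in the paragraph: this quantifies an exposure-outcome association, not a treatment effect.

```python
import math

def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio for a 2x2 case-control table, with a Woolf 95% CI.

    Rows: cases vs. controls; columns: exposed vs. unexposed.
    """
    # Cross-product ratio: odds of exposure among cases / among controls
    or_ = (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)
    # Woolf's standard error on the log-odds-ratio scale
    se_log_or = math.sqrt(1 / exposed_cases + 1 / unexposed_cases
                          + 1 / exposed_controls + 1 / unexposed_controls)
    lo = math.exp(math.log(or_) - 1.96 * se_log_or)
    hi = math.exp(math.log(or_) + 1.96 * se_log_or)
    return or_, (lo, hi)

# Hypothetical counts: 40 of 100 cases exposed, 20 of 100 controls exposed
or_, ci = odds_ratio(40, 60, 20, 80)
print(or_, ci)  # odds ratio = (40*80)/(60*20) ≈ 2.67
```

An odds ratio well above 1 flags the exposure as a candidate cause worth following up with the matched-sampling or experimental designs Rubin describes.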

I think this formulation suggests that answering a "why" question requires both "causes of effects" and "effects of causes" approaches: we search over many possible causes to identify the likely ones, and then we estimate the effect of each likely cause before saying much about causation. We probably still can't answer questions like "what caused World War I?", but maybe this gets us somewhere with more tractable "why" questions.

Posted by Richard Nielsen at 4:30 PM