26 June 2008
A few bloggers at other sites (Concurring Opinions and Election Law Blog) have pointed out an interesting footnote in the Supreme Court's recent decision on punitive damages in the Exxon Valdez case. Justice Souter took note of experimental research on jury decisionmaking done by Cass Sunstein, Daniel Kahneman, and others, but then dismissed it for the purposes of the decision because Exxon had contributed funding for the research:
The Court is aware of a body of literature running parallel to anecdotal reports, examining the predictability of punitive awards by conducting numerous “mock juries,” where different “jurors” are confronted with the same hypothetical case. See, e.g., C. Sunstein, R. Hastie, J. Payne, D. Schkade, W. Viscusi, Punitive Damages: How Juries Decide (2002); Schkade, Sunstein, & Kahneman, Deliberating About Dollars: The Severity Shift, 100 Colum. L. Rev. 1139 (2000); Hastie, Schkade, & Payne, Juror Judgments in Civil Cases: Effects of Plaintiff’s Requests and Plaintiff’s Identity on Punitive Damage Awards, 23 Law & Hum. Behav. 445 (1999); Sunstein, Kahneman, & Schkade, Assessing Punitive Damages (with Notes on Cognition and Valuation in Law), 107 Yale L. J. 2071 (1998). Because this research was funded in part by Exxon, we decline to rely on it.
It will be interesting to see whether this position is taken up by the lower courts; if so, we might see less incentive for private actors to fund social science research. That could be good or bad, I suppose, depending on one's views of the likelihood that researchers will be unduly influenced by their funding sources.
13 June 2008
Two awards given by the Society for Political Methodology were announced today, and both of them went to IQSS faculty members (and co-authors).
The Gosnell Prize is given to the "best paper on political methodology given at a conference", and this year's prize was awarded to Kevin Quinn for his paper "What Can be Learned from a Simple Table? Bayesian Inference and Sensitivity Analysis for Causal Effects from 2x2 and 2x2xK Tables in the Presence of Unmeasured Confounding." From the announcement:
Quinn's paper offers a set of steps to improve inference with binary independent and dependent variables and unmeasured confounds. He derives large sample, non-parametric bounds on the average treatment effect and shows how these bounds do not rely on auxiliary assumptions. He then provides a graphical way to depict the robustness of inferences as one changes assumptions about the confounds. Finally, he shows how one can use a Bayesian framework relying on substantive knowledge to restrict the set of assumptions on the confounds to improve inference.
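To give a sense of what such no-assumption bounds look like, here is a small sketch of the classic Manski-style large-sample bounds on the average treatment effect for a binary treatment and binary outcome. (This is an illustration of the general idea only, not Quinn's derivation or his Bayesian sensitivity analysis; the function name and example numbers are my own.)

```python
def manski_bounds(p1, p0, pi):
    """No-assumption bounds on the average treatment effect (ATE)
    for a binary outcome Y and binary treatment T.

    p1 = P(Y=1 | T=1), p0 = P(Y=1 | T=0), pi = P(T=1).

    The unobserved potential outcomes are only known to lie in [0, 1],
    which bounds E[Y(1)] and E[Y(0)] without any auxiliary assumptions.
    """
    # E[Y(1)] is identified for the treated; for controls it can be 0 or 1.
    ey1_lo, ey1_hi = p1 * pi, p1 * pi + (1 - pi)
    # E[Y(0)] is identified for controls; for the treated it can be 0 or 1.
    ey0_lo, ey0_hi = p0 * (1 - pi), p0 * (1 - pi) + pi
    # ATE = E[Y(1)] - E[Y(0)]; worst cases give the bounds.
    return ey1_lo - ey0_hi, ey1_hi - ey0_lo

# Hypothetical 2x2 table: 60% success rate among treated, 30% among
# controls, half the sample treated.
lo, hi = manski_bounds(p1=0.6, p0=0.3, pi=0.5)
```

A notable feature of these bounds is that their width is always exactly 1 regardless of the data, which is what motivates bringing in substantive assumptions (as Quinn does, in a Bayesian framework) to narrow them.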
The Warren Miller prize is given annually to the best paper appearing in Political Analysis. This year's prize has been awarded to Daniel E. Ho, Kosuke Imai, Gary King, and Elizabeth A. Stuart for their article, "Matching as Nonparametric Preprocessing for Reducing Model Dependence in Parametric Causal Inference." The abstract of their paper follows:
Although published works rarely include causal estimates from more than a few model specifications, authors usually choose the presented estimates from numerous trial runs readers never see. Given the often large variation in estimates across choices of control variables, functional forms, and other modeling assumptions, how can researchers ensure that the few estimates presented are accurate or representative? How do readers know that publications are not merely demonstrations that it is possible to find a specification that fits the author's favorite hypothesis? And how do we evaluate or even define statistical properties like unbiasedness or mean squared error when no unique model or estimator even exists? Matching methods, which offer the promise of causal inference with fewer assumptions, constitute one possible way forward, but crucial results in this fast-growing methodological literature are often grossly misinterpreted. We explain how to avoid these misinterpretations and propose a unified approach that makes it possible for researchers to preprocess data with matching (such as with the easy-to-use software we offer) and then to apply the best parametric techniques they would have used anyway. This procedure makes parametric models produce more accurate and considerably less model-dependent causal inferences.
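The preprocessing idea in the abstract can be illustrated with a toy example: match each treated unit to its nearest control on a confounding covariate, then compare outcomes on the matched sample. (This is a minimal one-covariate sketch of the general approach, not the authors' software or their recommended procedure; the data and function are invented for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: covariate x drives both treatment assignment and the outcome,
# so a naive treated-vs-control comparison is confounded.
n = 200
x = rng.normal(size=n)
t = (x + rng.normal(size=n) > 0).astype(int)
y = 2.0 * t + 3.0 * x + rng.normal(size=n)  # true treatment effect = 2

def nn_match(x, t):
    """One-to-one nearest-neighbor matching on x, with replacement."""
    treated = np.where(t == 1)[0]
    controls = np.where(t == 0)[0]
    matched = np.array([controls[np.argmin(np.abs(x[controls] - x[i]))]
                        for i in treated])
    return treated, matched

treated, matched = nn_match(x, t)

# Naive difference in means absorbs the imbalance in x...
naive = y[t == 1].mean() - y[t == 0].mean()
# ...while the matched comparison is far closer to the true effect.
matched_est = y[treated].mean() - y[matched].mean()
```

In the authors' framework the matched sample would then be handed to whatever parametric model one intended to run anyway; the point of the preprocessing step is that the resulting estimate depends much less on that model's specification.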