29 December 2010
The placebo effect has gotten a lot of positive press recently thanks to a new study by Ted Kaptchuk et al. (Harvard Medical School). He and his team took women suffering from irritable bowel syndrome and divided them into treatment and control groups. The control group received no treatment, while the treated group received twice-daily sugar pills.
Here's the twist, though: Kaptchuk and his fellow researchers actually told the treated patients that what they were taking was nothing more than a sugar pill.
According to the study,
Before randomization and during the screening, the placebo pills were truthfully described [to the treated group] as inert or inactive pills, like sugar pills, without any medication in it. Additionally, patients were told that "placebo pills, something like sugar pills, have been shown in rigorous clinical testing to produce significant mind-body self-healing processes."
The end result was that the treated group reported an improvement in their symptoms by the end of the 21-day trial.
More coverage on the study can be found here and here. An interesting take on the study is NPR's, which reports that one study volunteer said that her symptoms disappeared after just a few days of taking the placebos and that, at the conclusion of the study, she bought placebos of her own. A more critical look is here.
My sense is that there may be more than one treatment being applied here -- not only was the treatment group told about the placebo, but they were actually told that placebos have been shown to "produce significant mind-body self-healing processes." So are these actually placebos?
What do you think? Do you find the study and its design persuasive? And what lessons do we draw from this?
23 December 2010
My post from last week about the Greiner & Wolos Pattanayak study of legal representation prompted many of you to comment (quite thoughtfully, I think) that the legal system appears to be working. After all, if elite lawyers get the same outcome as non-elite lawyers (or self-representation), then that points to the idea that judges really are deciding cases based on the merits -- and not based on the identity of your lawyer.
Here's some context. A few years ago, Congress enacted mandatory criminal sentences that all federal judges had to adhere to. The Supreme Court eventually declared these unconstitutional, with the end result that those sentencing guidelines are now advisory instead of mandatory. Federal judges now have much more discretion than they did before in deciding what kinds of sentences to mete out -- and this has led to the fear that different judges will impose harsher or more lenient sentences. A new paper by Scott puts that fear to the test.
The Scott paper tests this by looking at 2,659 sentences that were handed down by the ten judges in the District of Massachusetts, a federal trial court based in Boston. Although I wish I had seen a more nuanced causal analysis (how were the judges assigned to the cases? was it a random treatment assignment?), I find the end result compelling enough -- which is that the identity of the judge does matter in the eventual sentencing. According to Scott, "[i]n cases not subject to mandatory minimum, the difference between the court's most lenient and more severe judges translates into an average of more than two years in prison." This kind of result dovetails with existing political science literature on judicial decision making, which consistently points to ideology and political beliefs as being very predictive of case outcome.
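As a rough illustration of the kind of disparity Scott measures, here is a minimal sketch in Python. The judge "severity effects" and sentence distribution are entirely invented for illustration; only the structure of the check (per-judge mean sentence, then the spread between the most lenient and most severe judge) mirrors the paper's comparison, and it assumes random case assignment.

```python
import random
import statistics
from collections import defaultdict

random.seed(2)

# Hypothetical: ten judges whose baseline severity differs by an invented
# amount (uniform within +/- 15 months around a 60-month average sentence).
judge_effects = [random.uniform(-15, 15) for _ in range(10)]

# Randomly assign 2,659 cases (the paper's case count) to the ten judges.
sentences = defaultdict(list)
for _ in range(2659):
    j = random.randrange(10)
    sentences[j].append(max(0, random.gauss(60 + judge_effects[j], 20)))

# Spread between the most lenient and most severe judge, in months.
means = [statistics.mean(sentences[j]) for j in range(10)]
spread_months = max(means) - min(means)
print(round(spread_months, 1))
```

With enough cases per judge, the per-judge means closely track the underlying severity effects, so the lenient-vs-severe spread reflects the judges rather than noise -- which is why a large observed spread is evidence that the identity of the judge matters.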
Given the results of these two papers, maybe what we're seeing here is that it's not the identity of the lawyer that matters in case outcome, but the identity of the judge? If so, then why do litigants (or defendants) spend so much money on lawyers? Shouldn't they be trying to game the system to try to get the best (i.e., most favorable) judges?
15 December 2010
Here's a neat new article called "What Difference Representation?" by Jim Greiner (Harvard Law School) and Cassandra Wolos Pattanayak (Harvard Statistics Department). What Greiner & Wolos Pattanayak have done is randomize clients calling into a free legal aid clinic into two groups: one group was offered free legal representation by trained students at Harvard Law School, while the second group was not. Not only is the experimental design pretty interesting, but it's hard to overstate how novel this is -- and how unusual it is for a group of lawyers (lawyers-to-be, in this case) to agree to participate in this kind of an experiment.
A basic summary of the results: an offer of free legal representation by an elite cadre of Harvard Law students does not increase the probability that a client will prevail in his or her claim. (There was a .04 increase in probability of prevailing, not statistically significant.) What the offer of free legal representation does do, however, is increase the delay that clients experience in the adjudication. (The mean time to adjudication for the treated population was 53.1 days versus 37.3 days for the control group, a statistically significant difference of nearly sixteen days.)
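The delay comparison above is just a difference in group means plus a significance test. Here is a minimal sketch of how such a test can be run, using simulated adjudication times whose group means match the reported 53.1 and 37.3 days -- the individual-level data, group sizes, and spread are invented, not the study's, and the study's actual test may differ.

```python
import random
import statistics

random.seed(0)

# Hypothetical data: simulated adjudication times (days), matched to the
# reported group means; sample size and standard deviation are made up.
treated = [random.gauss(53.1, 10) for _ in range(100)]
control = [random.gauss(37.3, 10) for _ in range(100)]

observed_diff = statistics.mean(treated) - statistics.mean(control)

# Permutation test: shuffle the group labels and count how often a
# difference at least this large arises by chance alone.
pooled = treated + control
extreme = 0
n_perm = 2000
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:100]) - statistics.mean(pooled[100:])
    if abs(diff) >= abs(observed_diff):
        extreme += 1

p_value = extreme / n_perm
print(round(observed_diff, 1), p_value)
```

A permutation test is a natural fit for a randomized design like this one, since it directly asks: if the offer of representation had no effect, how surprising would the observed gap be under re-randomization?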
Pretty interesting stuff.
6 December 2010
A recent New York Times article highlights recent research by Dan Cohen and Fred Gibbs that uses computational power and statistics* to answer questions about what Victorian literature says about the Victorians. I've known that this kind of thing was in the works in various parts of the humanities, but I haven't been keeping up. I think this kind of analysis will be making more inroads into the humanities and social sciences in the future (a previous NYT article in the series takes up this issue).
* Ok, they are just using word frequencies at the moment, but the data they are collecting in collaboration with Google made me drool at the possibilities for machine learning applications.
One quick observation:
It was interesting to me that the criticisms of quantitative text analysis are the same in literature as they are in political science.
(1) It lets people get away with not reading or interpreting the texts.
(2) It undermines the ability of research to get nuanced meaning out of texts.
(3) It shapes the kinds of questions researchers ask.
My quick thoughts on these:
(1) People who use statistical text analysis in their work generally have to read a lot of texts. I don't think quantification is a substitute for reading.
(2) Quant text analysis often does gloss over nuanced meanings, but it can often reveal broad trends in a huge body of texts that a close reading of a handful of texts can't.
(3) We tend to only really entertain research questions that we think we have the tools to answer, so until recently, very few people have tried to answer questions where you'd need to read 100,000 documents to get an answer. Putting these types of questions in the realm of possibility is not a bad thing.
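The word-frequency tallying that Cohen and Gibbs's project relies on can be sketched in a few lines. The two-sentence "corpus" below is a stand-in invented for illustration; the real work runs over a vast collection of scanned Victorian books.

```python
import re
from collections import Counter

# Toy stand-in corpus; the actual project counts words across thousands
# of digitized Victorian-era volumes.
corpus = [
    "God and duty weighed on the Victorian mind",
    "Science and doubt crept into the Victorian mind",
]

# Tally word frequencies across the whole corpus.
counts = Counter()
for text in corpus:
    counts.update(re.findall(r"[a-z]+", text.lower()))

print(counts.most_common(3))
```

Even this trivial tally shows the appeal: scale it to 100,000 books and you can trace how often "god" or "science" appears decade by decade, a question no close reading could answer.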
2 December 2010
Dan Carpenter came by the workshop yesterday to talk about his paper (with SSS-pal Justin Grimmer, Eitan Hersh, and Brian Feinstein) on close elections and their usefulness for estimating causal effects. A few recent papers exploit these marginal elections to answer a general class of questions: how does being elected to office affect a person’s outcomes? At the least, getting into office is a fairly big boost to one’s résumé and, further, while in office there are various (ahem) business opportunities that may or may not be entirely legal. Fans of politics (including political scientists) have a keen interest in the effect of office-holding on re-election, commonly known as the incumbency advantage.
Obviously, simply looking at winners and losers is a problematic strategy, to say the least. So, instead, we look for winners and losers whose elections were randomly determined. And extremely close two-candidate elections seem to fit the bill. Poor weather, ballot miscounts, and voting errors can all push a narrow election margin to either candidate. Thus, the argument goes, vote shares right around 50-50 essentially assign the office by coin flip, and comparing winners and losers around that cutoff can identify causal effects.
What Dan and his coauthors point out, though, is that some candidates might have more control than others over that coin flip. And candidates and parties are likely to devote more of their resources to those close elections than to safe ones. In fact, they show that candidates with structural advantages in these resources are far more likely to win these close elections. For example, winners of U.S. House elections are much more likely to share the party of the current governor of the state, even when the sample is restricted to races within +/- 2% of the 50-50 mark. This indicates that there may be deeper imbalances between winners and losers, even very close to the 50-50 mark.
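That finding is essentially a balance check: under true coin-flip assignment, winners of razor-thin races should share the governor's party about half the time. A minimal sketch of the check on simulated data, where the 58% win rate for governor-aligned candidates in close races is a number invented for illustration (not taken from the paper):

```python
import random

random.seed(1)

# Hypothetical assumption: candidates sharing the governor's party win 58%
# of races decided within +/- 2%, instead of the 50% that genuine
# randomization would imply. The 0.58 is made up for this sketch.
n_close = 5000
aligned_wins = sum(random.random() < 0.58 for _ in range(n_close))
frac_aligned = aligned_wins / n_close

# Under coin-flip assignment, frac_aligned should be ~0.5. A large
# z-statistic flags imbalance and casts doubt on the "as-if random" claim.
se = (0.5 * 0.5 / n_close) ** 0.5
z = (frac_aligned - 0.5) / se
print(round(frac_aligned, 3), round(z, 1))
```

This is the same logic as checking covariate balance in an experiment: if a pre-determined trait predicts who wins the "coin flip," the flips weren't fair.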
They suggest the fundamental differences between winners and losers in these close elections could come from two sources: a superior ability to get out the vote before the election, or successful legal challenges after it. If those ballot miscounts get recounted or thrown out in favor of a candidate due to better legal maneuvering, then those aren’t terribly random, are they? The main critique here is that causal effects are hard to identify by comparing winners and losers in close elections: we have to make sure that our proposed “randomizations” make sense theoretically and hold empirically.