
December 29, 2010

moving toward truthful placebos

The placebo effect has gotten a lot of positive press recently thanks to a new study by Ted Kaptchuk et al. (Harvard Medical School). He and his team took women suffering from irritable bowel syndrome and divided them into treatment and control groups. The control group received nothing, while the treated group received twice-daily sugar pills.

Here's the twist, though: Kaptchuk and his fellow researchers actually told the treated patients that what they were taking was nothing more than a sugar pill.

According to the study,

Before randomization and during the screening, the placebo pills were truthfully described [to the treated group] as inert or inactive pills, like sugar pills, without any medication in it. Additionally, patients were told that "placebo pills, something like sugar pills, have been shown in rigorous clinical testing to produce significant mind-body self-healing processes."

The end result was that the treated group reported improved symptoms by the end of the 21-day trial.

More coverage on the study can be found here and here. An interesting take on the study is NPR's, which reports that one study volunteer said that her symptoms disappeared after just a few days of taking the placebos and that, at the conclusion of the study, she bought placebos of her own. A more critical look is here.

My sense is that there may be more than one treatment being applied here -- not only was the treatment group told they were taking placebos, but they were also told that placebos have been shown to "produce significant mind-body self-healing processes." So are these actually placebos?

What do you think? Do you find the study and its design persuasive? And what lessons do we draw from this?

Posted by Maya Sen at 12:48 PM

December 23, 2010

discrepancies in criminal sentencing

My post from last week about the Greiner & Wolos Pattanayak study of legal representation prompted many of you to comment (quite thoughtfully, I think) that the legal system appears to be working. After all, if elite lawyers get the same outcomes as non-elite lawyers (or as self-representation), that points to the idea that judges really are deciding cases based on the merits -- and not based on the identity of the lawyer.

With that said, I wonder how you will react to a new Stanford Law Review study by Ryan W. Scott (Indiana University School of Law), which looks at criminal sentencing disparities.

Here's some context. A few years ago, Congress enacted mandatory sentencing guidelines that all federal judges had to adhere to. The Supreme Court eventually declared the mandatory scheme unconstitutional, and the end result is that those sentencing guidelines are now advisory instead of mandatory. Federal judges now have much more discretion than they did before in deciding what kinds of sentences to mete out -- and this has led to the fear that similar defendants will receive harsher or more lenient sentences depending on the judge they happen to draw.

The Scott paper tests this by looking at 2,659 sentences handed down by the ten judges of the District of Massachusetts, a federal trial court based in Boston. Although I wish I had seen a more nuanced causal analysis (how were the judges assigned to the cases? was it a random treatment assignment?), I find the end result compelling enough -- which is that the identity of the judge does matter to the eventual sentence. According to Scott, "[i]n cases not subject to mandatory minimum, the difference between the court's most lenient and more severe judges translates into an average of more than two years in prison." This kind of result dovetails with the existing political science literature on judicial decision making, which consistently points to ideology and political beliefs as being very predictive of case outcomes.
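
To make that comparison concrete, here is a minimal sketch of the kind of judge-by-judge summary underlying the claim -- the data file and column names are hypothetical stand-ins, and Scott's actual analysis (which accounts for case characteristics) is considerably more involved:

    import pandas as pd

    # Hypothetical data: one row per sentence, with the sentencing judge
    # and the sentence length in months.
    df = pd.read_csv("sentences.csv")

    judge_means = df.groupby("judge")["months"].mean().sort_values()
    print(judge_means)
    print("spread between most lenient and most severe judges (months):",
          judge_means.max() - judge_means.min())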

Given the results of these two papers, maybe what we're seeing is that it's not the identity of the lawyer that matters to case outcomes, but the identity of the judge? If so, then why do litigants (or defendants) spend so much money on lawyers? Shouldn't they instead be trying to game the system to get the best (i.e., most favorable) judges?

Posted by Maya Sen at 8:30 PM

December 15, 2010

experiments and legal representation

Here's a neat new article called "What Difference Representation?" by Jim Greiner (Harvard Law School) and Cassandra Wolos Pattanayak (Harvard Statistics Department). What Greiner & Wolos Pattanayak have done is randomize clients calling into a free legal aid clinic into two groups: one group was offered free legal representation by trained students at Harvard Law School, while the second group was not. Not only is the experimental design pretty interesting, but it's hard to overstate how novel this is -- and how unusual it is for a group of lawyers (lawyers-to-be, in this case) to agree to participate in this kind of experiment.

A basic summary of the results: an offer of free legal representation by an elite cadre of Harvard Law students does not increase the probability that a client will prevail in his or her claim. (There was a 0.04 increase in the probability of prevailing, not statistically significant.) What the offer of free legal representation does do, however, is increase the delay that clients experience in adjudication. (The mean time to adjudication was 53.1 days for the treated group versus 37.3 days for the control group, a statistically significant difference of roughly sixteen days.)
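
For those curious about the mechanics, here is an illustrative sketch of the two comparisons using simulated data -- the sample sizes and baseline win rate are invented; only the reported magnitudes (a 0.04 gap in win probability, 53.1 versus 37.3 days) echo the paper:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Simulated outcomes for clients offered representation vs. not.
    won_offer = rng.binomial(1, 0.74, size=200)       # 0/1: prevailed?
    won_no_offer = rng.binomial(1, 0.70, size=200)
    days_offer = rng.normal(53.1, 20.0, size=200)     # time to adjudication
    days_no_offer = rng.normal(37.3, 20.0, size=200)

    # Difference in the probability of prevailing (reported: ~0.04, n.s.)
    print(stats.ttest_ind(won_offer, won_no_offer))
    # Difference in time to adjudication (reported: ~16 days, significant)
    print(stats.ttest_ind(days_offer, days_no_offer))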

Pretty interesting stuff.

Posted by Maya Sen at 10:10 AM

January 21, 2010

visualizing the campaign finance case

My colleague, Brandon Stewart, pointed me to Many Eyes (manyeyes.alphaworks.ibm.com), an IBM-developed website that allows you to quickly upload data and visualize it using a variety of techniques.

Many Eyes lets you use textual data, so I just tried it out using the majority and dissenting opinions from Citizens United v. FEC, today's Supreme Court decision striking down existing campaign finance law. (Note: Let's just say it's not a bad idea to use publicly available, non-copyrighted data.)

The resulting visualizations are just terrific, and they actually go far in illustrating the substantive differences between the conservative and liberal Justices on the campaign finance issue.

The first figure represents the majority opinion (written by Justice Kennedy, a moderate conservative), with larger words representing terms used more frequently over the course of the opinion. Obviously, what we see is a strong consideration of "speech" interests -- no doubt discussed in the context of First Amendment issues.

[Figure: word cloud of Justice Kennedy's majority opinion (kennedy.png)]

By contrast, take a look at the dissenting/concurring opinion (written by Justice Stevens, a liberal). The most frequently used words here are "corporate," "corporation," "corruption," etc. The word "speech" appears much less frequently, suggesting that the liberal Justices were more concerned with corporations influencing elections than with free speech issues.

[Figure: word cloud of Justice Stevens's dissenting/concurring opinion (stevens.png)]
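
If you'd rather compute the underlying word counts yourself, here is a minimal sketch, assuming each opinion has been saved as a plain-text file (the file names and stopword list are placeholders):

    import re
    from collections import Counter

    STOPWORDS = {"the", "of", "and", "to", "a", "in", "that", "is", "it", "as"}

    def top_words(path, k=10):
        """Return the k most common non-stopword words in a text file."""
        text = open(path, encoding="utf-8").read().lower()
        words = re.findall(r"[a-z]+", text)
        return Counter(w for w in words if w not in STOPWORDS).most_common(k)

    for opinion in ["kennedy_majority.txt", "stevens_dissent.txt"]:
        print(opinion, top_words(opinion))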

It's amazing how much information we can glean from these visualizations, even without having perused either opinion. If anybody has thoughts on this, I'd be keen to hear them.

Posted by Maya Sen at 1:16 PM

January 18, 2010

why academics are so liberal

I read in this morning's New York Times about research being conducted by two sociologists, Neil Gross (British Columbia) and Ethan Fosse (Harvard), on why academics tend to be left of center. That professors are more liberal than non-academics is a pretty well-known fact; at the same time, we don't have a good idea as to why this is. Previous research on this point has largely relied on anecdotal or qualitative techniques, so Gross and Fosse's paper, which relies on survey data, looks promising. A copy of the working paper is here.

The paper uses data from the General Social Survey pooled over time (1974-2008, n = 325, once observations with missing outcomes were removed), where the dependent variable is a respondent's self-described ideological orientation on a seven-point scale.

The technique the authors use to test various hypotheses explaining the ideology gap is the Blinder-Oaxaca decomposition, which was developed by labor economists to estimate how much of an observed gap between two groups can be attributed to individual predictors. For example, you could use the technique to gain traction on the factors that drive the wage gap between men and women. In this case, the authors used it to figure out the role of different variables -- religion, parents' education, tolerance, verbal skills, having lived in a foreign country, etc. -- in the ideological gap between the professorial and non-professorial populations. (Note: In the interest of full disclosure, this technique is new to me.)
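
Since the technique was new to me, I worked through a toy version to fix ideas. Here is a sketch of the two-fold decomposition on simulated data -- the predictors are hypothetical stand-ins, not the actual GSS measures Gross & Fosse use:

    import numpy as np

    rng = np.random.default_rng(0)

    def ols(X, y):
        """OLS coefficients, intercept first."""
        X1 = np.column_stack([np.ones(len(X)), X])
        return np.linalg.lstsq(X1, y, rcond=None)[0]

    # Simulated predictors (say, education, tolerance, religiosity) for
    # professors (group A) and everyone else (group B).
    X_a = rng.normal(0.5, 1.0, size=(300, 3))
    X_b = rng.normal(0.0, 1.0, size=(3000, 3))
    beta_true = np.array([0.4, 0.3, -0.2])
    y_a = 1.0 + X_a @ beta_true + rng.normal(0, 1, 300)  # 7-point ideology scale
    y_b = X_b @ beta_true + rng.normal(0, 1, 3000)

    beta_a, beta_b = ols(X_a, y_a), ols(X_b, y_b)
    xbar_a = np.r_[1.0, X_a.mean(axis=0)]
    xbar_b = np.r_[1.0, X_b.mean(axis=0)]

    gap = y_a.mean() - y_b.mean()
    explained = (xbar_a - xbar_b) @ beta_b    # due to different characteristics
    unexplained = xbar_a @ (beta_a - beta_b)  # due to different coefficients
    print(f"gap={gap:.3f}  explained={explained:.3f}  unexplained={unexplained:.3f}")

The two pieces sum exactly to the raw gap; the "explained" share is the analogue of the figure the authors report for their survey measures.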

In terms of the ideology gap, it appears that having a graduate degree, being generally tolerant of people different from yourself, and lacking a strong religious affiliation are some of the factors most strongly explaining ideological differences between academics and non-academics. In fact, these variables explain roughly half of the observed gap. (Of course, this does not rule out that some confounder could explain all of these factors, as well as self-described ideology.)

Gross & Fosse go on to posit that the professorial career has acquired a liberal reputation over time, so that liberal people are more likely to be drawn to it. Their results do not seem to provide direct support for this, a fact they acknowledge; nonetheless, their research is interesting and is drawing some attention.

My take-away is that there still seems to be a lot of room for quantitative research on this navel-gazing question. If people have thoughts, I'd be keen to hear them.

Posted by Maya Sen at 9:55 AM

January 15, 2010

secrets of rating success

Complementing Matt's post about TV-watching patterns below, here's an interesting article from The Guardian about how three British computer scientists are using content analysis techniques to parse out what makes (or does not make) a hit TV script.

Posted by Maya Sen at 9:02 AM

December 25, 2009

a cautionary christmas tale

Merry Christmas, everyone!

I was amused to read about the hoopla involving this online "study" in the BMJ entitled "Santa Claus: A Public Health Pariah." The tongue-in-cheek article, written by Australian epidemiologist Nathan J. Grills, contends that the corpulent Mr. Claus and his "rotund sedentary image" set a bad example for kids and adults alike.

The joke has apparently been lost on media organizations, bloggers, and radio personalities, who have suggested that the article is just another example of politically correct, killjoy academic research. Grills has subsequently received harassing emails, and he was called a "Scrooge" by a blogger for the Atlanta Journal-Constitution.

Yikes!

Grills has since affirmed that he is, in fact, a "Santa believer and lover" and that he has even "donned the red and white garb a number of times to bring cheer at school concerts in rural Victoria." "To clarify," he said, "I am not a Santa researcher - the article was written in my spare time for a bit of comic relief."

More about the media hoopla surrounding Grills's article is here and here.

Posted by Maya Sen at 10:54 AM

December 19, 2009

competition and the N-effect

I've taken my fair share of standardized tests, so I sat up and took notice of a recent study published in Psychological Science about what happens to SAT scores when students take the test in crowded versus empty rooms. What the two researchers, Stephen Garcia (UMich) and Avishalom Tor (University of Haifa), are out to study is the impact that increased competition can have on one's performance. The results of this line of research are pretty interesting.

Here's how they did it. First, Garcia & Tor took state-by-state mean SAT scores as their outcome of interest. (This unfortunately restricts their analysis to an n of 50, somewhat ironic in an article about the "N effect.") Second, to measure "competition," Garcia & Tor created a "density" variable by dividing the number of test-takers by the number of state testing venues. This, they contend, captures the likelihood that a student would be sitting in a jam-packed classroom (teeming with potential competitors) rather than in a relatively empty, competition-free environment. Third, Garcia & Tor controlled for a host of potential confounders, among them state funding for elementary and secondary education, per capita income, population density, the percent of students taking the SAT, the percent of test-takers reporting having a college-educated parent, and the percent of test-takers self-identifying as a racial minority.
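
As a rough sketch, the core state-level regression might look something like the following -- the file and column names are hypothetical, and Garcia & Tor's actual specification may well differ:

    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical state-level data (n = 50), one row per state.
    df = pd.read_csv("sat_states.csv")
    df["density"] = df["test_takers"] / df["venues"]

    controls = ["school_funding", "income_per_capita", "pop_density",
                "pct_taking_sat", "pct_college_parent", "pct_minority"]
    X = sm.add_constant(df[["density"] + controls])
    fit = sm.OLS(df["mean_sat"], X).fit()
    print(fit.summary())  # the N-effect predicts a negative density coefficient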

The results suggest that the denser a test-taking environment is, the lower the state mean SAT scores are. This, the authors contend, is evidence in favor of the idea that as the number of potential competitors increases, one's motivation to compete dwindles.

I'm actually not 100% convinced that the two researchers have controlled for all possible confounding variables. Teacher quality, student-to-teacher ratios, strength of parental involvement, spending per student -- to name just a few -- could be factors that affect both the probability of treatment (density of the venue) and the potential outcome (SAT scores). In addition, the coarseness of the measurements (which are all at the state level) makes it impossible to include confounders and other interesting variables at the city or student level. For example, you would think that ambitious parents might encourage their children to take exams in a more comfortable (more rural) setting; this parental ambition would in turn translate into higher SAT scores. When you measure things at the state level, however, it becomes difficult to examine possibilities like this.

The Psychological Science paper is actually more of a compendium of studies undertaken by Garcia & Tor on the topic, and some of the other studies are more persuasive. For example, the authors administered a timed online test to a group of Michigan undergraduates. One subset of the group was told that they were competing against ten other students and that the quickest and most accurate 20% would get a $5 prize. The other subset was told that they were competing against 100 students and that, similarly, the quickest and most accurate 20% would get a $5 prize. What happened? The group competing against 10 students finished the online test faster than the other group (although note that there was no difference in the accuracy of the two groups).

This whole N-effect is interesting to think about -- are we more likely to sell ourselves short when it appears that we're facing stiff competition? In my experience, this seems true. (Although, under this rationale, my own best performance would have been on the GRE, where I sat alone at a computer while taking the test; suffice it to say, it wasn't a very strong showing.)

I'd also be interested in how this line of research contradicts or complements the social networks research being conducted by James Fowler, Nicholas Christakis, and the like. I am guessing here, but those folks might counter that it's not competition that makes people perform more poorly -- rather, it may be the camaraderie of being in a small group (a "we're-all-in-this-together" kind of attitude) that positively influences people working in more intimate environments.

I imagine that others have more experience with this kind of research, and I would be interested in hearing thoughts on this.

Posted by Maya Sen at 2:17 PM

December 7, 2009

trends in municipal data availability

I read with some interest a recent NYTimes article about how cities are increasingly making reams of municipal data public. Basically, the article noted a growing trend among US municipalities toward making public data available and easy to digest. Among the cities taking the lead are San Francisco (with its DataSF website), New York (Data Mine), and Washington, DC (D.C. Data Catalog). The federal government has long hosted its own data site, Data.gov.

This trend doesn't seem to be limited to just US governments.

Over in the UK, Gordon Brown's government is hard at work on a new data site, data.gov.uk, which it hopes to launch early in 2010. In fact, the prime minister just today delivered a speech in which he extolled the virtues of data availability:

Releasing data can and must unleash the innovation and entrepreneurship at which Britain excels - one of the most powerful forces of change we can harness.

When, for example, figures on London's most dangerous roads for cyclists were published, an online map detailing where accidents happened was produced almost immediately to help cyclists avoid blackspots and reduce the numbers injured.

And after data on dentists went live, an iphone application was created to show people where the nearest surgery was to their current location.

And from April next year ordnance survey will open up information about administrative boundaries, postcode areas and mid-scale mapping.

All of this will be available for free commercial re-use, enabling people for the first time to take the material and easily turn it into applications, like fix my street or the postcode paper

For social scientists, having access to more data is never a bad thing. But, more importantly, perhaps having access to this otherwise mundane data will lessen our dependence on (notoriously unreliable) public opinion surveys. Instead of asking people how much they feel crime is affecting their particular neighborhood, we could measure it using the data provided by DataSF, data.gov.uk, and others. Instead of asking people how reliable or safe their local hospitals are, we could measure it using the same resources.

My point here is that it's often more useful for social scientists to see how people actually behave rather than to ask them how they say they behave.

Of course, all this depends on having access to data in its rawest form. From just a quick look at some of these websites, I saw a lot of data in processed form (for example, available only through iPhone apps or through summary statistics in PDF form). This kind of processing makes things more accessible to the casual data consumer, but vastly less useful for the social scientist ready and willing to do her own data analysis.

It will also be interesting to see how governments, private companies, and academic institutions work together (or fail to work together) to make data available. Will Google step in to provide a search engine for these databases? Will governments make their data available on something like IQSS's Dataverse? In general, what's the best way to make data available both to researchers and to the public?

It seems like an exciting time for data availability. If folks have other thoughts on this -- or leads or tips on other municipalities or governments increasingly making their data available -- I'd be keen to hear them.

Posted by Maya Sen at 12:42 PM

November 26, 2009

(not) growing up to be a lawyer

I went to law school before I ended up as a graduate student, so I read with some interest a recent essay by Vanderbilt Law Professor Herwig Schlunk entitled "Mamas, don't let your babies grow up to be...lawyers" (an online version is at the Wall Street Journal's Law Blog).

Maybe the title gives it all away, but the gist is that a legal education might not always pay off. While I wholeheartedly agree with this, I'm less enthusiastic about the author's methodology. The author essentially constructs three hypothetical law students: "Also Ran," a legal slacker who attends a lower-ranked law school; "Solid Performer," a middling kind of person who attends a middling kind of law school; and "Hot Prospect," a high-flying and well-placed law student. The essay then more or less "follows" them through their legal "careers" to see if their discounted expected gains in salary match what they "paid" in terms of opportunity costs, tuition, and interest on their student loans. (I know, I'm using a lot of air quotes here.) Unsurprisingly, a legal education isn't a very good investment for any of the individuals.

What's interesting about the paper is that it's essentially an exercise in counterfactuals -- what Also Ran would have earned had he not gone to law school, what Hot Prospect would have made had she not gone, etc. To that extent, it's very fun to think about. But, on the flip side, that's all it is -- a thought experiment. Maybe an interesting extension would be to do an empirical causal analysis -- say, matching pre-law undergraduates along a slew of covariates and then seeing how the "treatment" of law school affects (or does not affect) their earnings. I'd certainly find that a lot more persuasive (although I imagine that the kind of data you'd need to pull this off would be well-nigh impossible to collect).
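
To make that suggestion concrete, here is a toy version of the matching design -- everything here (the data file, columns, and covariates) is hypothetical, and a serious analysis would also have to worry about unmeasured confounding:

    import numpy as np
    import pandas as pd

    # Hypothetical data: one row per person, with pre-law covariates,
    # a law-school indicator, and later earnings.
    df = pd.read_csv("graduates.csv")
    covs = ["gpa", "test_score", "parent_income"]

    treated = df[df["went_law"] == 1]
    control = df[df["went_law"] == 0]

    # Standardize covariates, then 1:1 nearest-neighbor matching with replacement.
    mu, sd = df[covs].mean(), df[covs].std()
    Z_t = (treated[covs] - mu) / sd
    Z_c = (control[covs] - mu) / sd

    matched_earnings = []
    for _, z in Z_t.iterrows():
        dist = ((Z_c - z) ** 2).sum(axis=1)   # squared distance to each control
        matched_earnings.append(control.loc[dist.idxmin(), "earnings"])

    att = treated["earnings"].mean() - np.mean(matched_earnings)
    print("estimated effect of law school on earnings (ATT):", att)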

The article has gotten quite a bit of attention from legal types, including from the WSJ's Law Blog and from the NYTimes Economix blog. Thoughts?

Posted by Maya Sen at 10:58 AM

breast cancer, rare diseases, and bayes rule, revised

Happy Thanksgiving!

Last Thursday, I posted about the recent government recommendations regarding breast cancer screening in women ages 40-49. At least one of you wrote me to say that one of my calculations might have been slightly off (it was), and so I did some more investigating on this issue, as well as on the new recommendations on cervical Pap smears. (Sorry -- it took me a few days to get around to all of this!)

To back up a second, here's what the controversial new recommendations (made by the US Preventive Services Task Force) say:

  • The USPSTF recommends against routine screening mammography in women aged 40 to 49 years. The decision to start regular, biennial screening mammography before the age of 50 years should be an individual one and take patient context into account, including the patient's values regarding specific benefits and harms.
  • The USPSTF recommends biennial screening mammography for women aged 50 to 74 years.
  • The USPSTF concludes that the current evidence is insufficient to assess the additional benefits and harms of screening mammography in women 75 years or older.
  • The USPSTF recommends against teaching breast self-examination (BSE).

As I noted in my original post, these recommendations really upend existing wisdom. For years, women have been told that routine self-examinations -- followed by regular mammograms past age 40 -- are the best way to counter breast cancer. In fact, women as young as 18 are regularly given directions on how to do self-exams, and it's not unusual for women in their 30s to undergo mammograms.

So all of this got me thinking that this might be a straightforward application of the "rare diseases" example of Bayes' Rule (which many people see in their first probability course). I did some (more) digging around in one of the government reports, and here's how the probabilities break down:

  • First, experts estimate that 12.3% of women will develop the more invasive form of breast cancer in their lifetimes (p. 3).
  • Second, the probability of a woman developing breast cancer between ages 40 and 49 is 1 in 69, but it increases as she gets older; in women ages 50-59, it's 1 in 38 (p. 3).
  • Third, breast cancer is also more difficult to detect accurately in younger women -- likely because their breast tissue is more dense and fibrous. Experts estimate that there is a false-positive rate of 97.8 per 1,000 screened for women in their 40s, but an 86.6 per 1,000 rate for women in their 50s (Table 7).
  • Fourth, there is a false-negative rate of 1.0 per 1,000 screened for women in their 40s, and a 1.1 per 1,000 rate for women in their 50s. (This last part is what I goofed up in my last post; thanks to those who caught it! It's also in Table 7 of the report, although, as we see later, it doesn't change the probabilities by much.)

Now bear with me while I go through the mechanics of Bayes' Rule. For women in their 40s, here are the pertinent probabilities:

P(cancer) = 1/69
P(no cancer) = 1-1/69 = 68/69
P(positive|no cancer) = 97.8/1000
P(negative|no cancer) = 1 - 97.8/1000 = 902.2/1000
P(negative|cancer) = 1/1000
P(positive|cancer) = 1 - 1/1000 = 999/1000

And for women in their 50s:

P(cancer) = 1/38
P(no cancer) = 1-1/38 = 37/38
P(positive|no cancer) = 86.6/1000
P(negative|no cancer) = 1 - 86.6/1000 = 913.4/1000
P(negative|cancer) = 1.1/1000
P(positive|cancer) = 1 - 1.1/1000 = 998.9/1000

The probability we are interested in is the probability of cancer given that a woman has tested positive, P(cancer|positive). Using Bayes' Rule:

P(cancer|positive) = P(positive|cancer)*P(cancer)/P(positive)
                   = P(positive|cancer)*P(cancer)/[P(positive|no cancer)*P(no cancer) + P(positive|cancer)*P(cancer)]

We now have all of the moving parts. Let's first look at a woman in her 40s:

P(cancer|positive) = (999/1000*1/69)/(97.8/1000*68/69+999/1000*1/69)
= 0.1305985

and for a woman in her 50s:

P(cancer|positive) = (998.9/1000*1/38)/(86.6/1000*37/38+998.9/1000*1/38)
= 0.2376579

All this very simple analysis suggests is that mammograms do appear to be a less reliable test for younger women. Whether the recommendations make sense is another matter. Insurance companies might use them as an excuse to deny coverage for women with higher-than-average risk. In addition, as some of you noted, a 13% chance of cancer given a positive test is nothing to sneeze at, and it's much higher than the baseline 1/69 rate for women in their 40s (though comparable to a woman's 12% lifetime risk). Lastly, I also refer folks to Andrew Thomas's post, where he discusses the metrics used by the task force and notes that the confidence interval for women in their 40s lies completely within the confidence interval for women in their 50s.

I also did some very brief investigation regarding the new cervical cancer guidelines. For those of you unfamiliar with this story, the American College of Obstetricians and Gynecologists recently issued recommendations that women up to the age of 21 no longer receive Pap tests and that older women receive them less often -- advice also contrary to what women have been told for decades.

It was much harder to pinpoint the false-positive and false-negative rates involved with Pap tests (a lot of medical jargon, different levels of detection, and human and non-human error made things confusing). I did manage to find this article in the NEJM. The researchers there looked at women ages 30 to 69 (a different subgroup, unfortunately, from the under-21 group), but they do report that the sensitivity of Pap testing was 55.4% and the specificity was 96.8%. This corresponds to a false-negative rate of around 44.6% and a false-positive rate of around 3.2%. (Other references I've seen elsewhere hint that the false-negative rate could be anywhere from 15% to 40%, depending on the quality of the lab and the cells collected.)

The other thing to note is that cervical cancer is very rare in young women and, unlike other forms of cancer, it grows relatively slowly. According to the New York Times, 1-2 cases occur per 1,000,000 girls ages 15 to 19. This, combined with the high false-negative rates, resulted in the ACOG recommendations.
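
As a back-of-the-envelope illustration -- with the caveat that these sensitivity and specificity figures come from the 30-to-69 age group, not the under-21 population at issue -- plugging the numbers into the same Bayes' Rule calculation as above gives a vanishingly small chance of cancer given a positive Pap test for young women:

    # Reusing p_cancer_given_positive() from the sketch above:
    # prevalence ~1.5 per 1,000,000, false-positive rate 3.2%,
    # false-negative rate 44.6%.
    print(p_cancer_given_positive(1.5 / 1_000_000, 0.032, 0.446))  # ~2.6e-05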

My sense is that the ACOG recommendations are on more solid footing, but if people have comments, I'd be keen to hear them.

Posted by Maya Sen at 10:03 AM