4 October 2007
I just read a particularly nice paper on social contagion, one that elegantly addresses a key issue in whether social networks really have an impact on behavior—the direction of the causal arrow. This particular paper examined the “contagiousness of voting.” Does it affect you whether the people you know vote? Generally, it does turn out that there is a positive correlation between whether you vote and whether the people you hang out with vote. Well, the problem—as in many domains—is that this pattern could be reverse causation (i.e., that you hang out with people precisely because they are as likely to vote as you are—unlikely, but in some domains possible), or, more likely in this case, that there are other factors (general civic values, SES, etc.) that affect both whom you are connected to and your behavior.

So, how to figure out the causal arrow? The problem with studying social influence in the lab (with all due respect to Asch, etc.) is that you can’t simulate a real relationship in a laboratory. What’s the alternative? David Nickerson came up with one: experimenting on real relationships. Nickerson conducted a field experiment, with a get-out-the-vote (GOTV) treatment, aimed at detecting the contagion of the GOTV treatment through pre-existing relationships. This builds on other randomized field experiments (e.g., Gerber and Green 2000; Green, Gerber, and Nickerson 2003) that have demonstrated that GOTV campaigns actually do increase turnout significantly.
Nickerson took this one step further, to see whether the stimulus of the GOTV campaign was contagious to other members of a household. The basic design is that you have a treatment group, which receives the GOTV pitch at the door (this is a door-to-door canvass), and a placebo group, which receives a pitch about recycling. Treatment and placebo are randomly allocated. By design, the sample was restricted to households containing exactly two registered voters.
Thus, we have: treatment/placebo → ego (the person who answers the door) →? alter (the person who does not answer the door)
The key research question is whether the GOTV signal is somehow transmitted to the alter by ego. Given “atomistic” assumptions about people (i.e., no interdependence of behavior), even if there is an effect on ego, there should be no effect on alter.
So, what did Nickerson find? First, unsurprisingly, he did find a big effect on ego. In one site (Denver) turnout of the placebo group was 39.1% and of the treatment group 47.7%, and in the second site (Minneapolis) 16.2% vs 27.1%. What about the numbers for the alters? In Denver, the numbers were 36.9% vs 42.4%, and in Minneapolis 17.3% vs 23.6% (pooled, one-tailed p < .02). In other words, the secondary effects were about 60% of the primary effects (and this does not measure other possible ripple effects).
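For concreteness, the primary and secondary effects can be recomputed directly from the turnout percentages quoted above (a quick sketch; the only inputs are the four pairs of numbers in the preceding paragraph):

```python
# Recompute direct (ego) and indirect (alter) treatment effects from the
# turnout percentages quoted above: (placebo %, treatment %) per site.
ego = {"Denver": (39.1, 47.7), "Minneapolis": (16.2, 27.1)}
alter = {"Denver": (36.9, 42.4), "Minneapolis": (17.3, 23.6)}

for site in ego:
    primary = ego[site][1] - ego[site][0]        # effect on the person at the door
    secondary = alter[site][1] - alter[site][0]  # effect on the housemate
    print(f"{site}: primary +{primary:.1f} pts, secondary +{secondary:.1f} pts, "
          f"ratio {secondary / primary:.0%}")
```

The site-level ratios come out at roughly 58% and 64%, consistent with the “about 60%” figure.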
A few minor critiques of the paper. First, while the main point is to demonstrate the treatment effect, the paper could do more to examine the pathway of that effect. Is the secondary effect entirely the result of ego voting, and thus increasing the probability of alter voting? I doubt it—it’s just too big an effect. Let’s say that 10% of the people who received the treatment voted because of the treatment. For some of those people, their alters would have voted anyhow (less often than the population mean, likely, but more than 0)—let’s say one in five. Further, let’s say that for some of the remaining 8% there were exogenous reasons why the alter could not vote—they were traveling, working all day, sick, etc.—let’s say one in ten. That leaves about 7% of pairs in which the alter would not otherwise have voted, almost all of whom now have to turn out under this interpretation. I am skeptical of a contagion effect of close to 100%.
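The back-of-envelope arithmetic above can be made explicit. All of the shares here are the hypothetical ones from the preceding paragraph (10%, one in five, one in ten), not data from the paper, and the observed alter effect is taken as roughly 6 points:

```python
# Back-of-envelope check of the "ego votes, therefore alter votes" pathway.
# All inputs are the hypothetical shares from the text, not data from the paper.
ego_moved = 0.10             # treated egos who voted *because of* the treatment
alter_votes_anyway = 1 / 5   # of those pairs, alters who would vote regardless
alter_unable = 1 / 10        # of the remainder, alters who could not vote at all

# Pairs where the alter's vote is actually up for grabs: about 7.2%.
persuadable = ego_moved * (1 - alter_votes_anyway) * (1 - alter_unable)

observed_alter_effect = 0.06  # roughly the 5.5-6.3 point effects reported above
implied_contagion = observed_alter_effect / persuadable

print(f"persuadable pairs: {persuadable:.1%}")
print(f"implied contagion rate: {implied_contagion:.0%}")
```

On these assumptions, a ~6-point alter effect implies a contagion rate above 80% among the remaining pairs, which is the implausibly high rate the paragraph above is skeptical of.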
More likely: there are indirect effects even when there aren’t direct effects. In particular, I bet that the treatment increased turnout among alters in cases where the ego did not turn out. For example, imagine ego is traveling on election day, gets the GOTV pitch, reminds alter to vote. Ego cannot vote, but alter’s likelihood of voting may increase.
It would have been possible to get at this by looking at turnout rates of alters in cases where ego did or did not receive the treatment, split by whether ego did or did not turn out. This is not perfect, because households where ego did not turn out even after receiving the treatment are likely different from the equivalent households in the placebo condition (e.g., these may be the hard-core nonvoting households). How well this could be finessed depends on whether there were other data on subjects (e.g., whether they had voted in the preceding election). In any case, I bet that it would have been possible to discern some of that pathway.
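The split proposed here is straightforward to sketch. The pair records below are entirely made up for illustration (the paper does not report this breakdown); the point is only the shape of the tabulation:

```python
from collections import defaultdict

# Hypothetical pair records: (ego_treated, ego_voted, alter_voted).
# These are illustrative placeholders, not data from the study.
pairs = [
    (True, True, True), (True, True, False), (True, False, True),
    (True, False, False), (False, True, True), (False, True, False),
    (False, False, True), (False, False, False),
]

# Tabulate alter turnout by (ego treated?, ego voted?).
cells = defaultdict(lambda: [0, 0])  # key -> [alter votes, pair count]
for treated, ego_voted, alter_voted in pairs:
    cells[(treated, ego_voted)][0] += alter_voted  # bool counts as 0/1
    cells[(treated, ego_voted)][1] += 1

for (treated, ego_voted), (votes, n) in sorted(cells.items()):
    print(f"ego treated={treated!s:5} ego voted={ego_voted!s:5} "
          f"alter turnout: {votes}/{n} = {votes / n:.0%}")
```

A gap between the treated and placebo rows within the ego-did-not-vote stratum would be evidence of the indirect pathway, subject to the selection caveat noted above.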
It also would have been useful to incorporate into the analysis data about egos and alters (again, to the extent possible). Most obviously, were there gender effects? Did it matter whether ego was male and alter female, or vice versa? Did the relative ages of ego and alter matter? For example, was contagion more likely between people of the same age (one would guess married couples)? And for pairs where there was a generational difference (one would guess parent-child)—was contagion more likely from senior to junior, or vice versa?
These points, I should note, would simply have been whipped cream on an awfully nice pie. And the basis for causal inference in these additional analyses (what I liked so much about this paper) might be weaker. But so what, if some neat quasi-experimental results are combined with solid experimental findings? (There is also a more general lesson here about looking for more than just the treatment effect in experiments—especially where there are possible moderators.)
I also have some concerns about whether the treatment might have spilled over onto the alters, depending on exactly how the canvassing was done. Surely there were some cases among the 486 treatment subjects where the alter was also at the door, or somehow perceptibly in the background. How were these cases handled? Were they discarded? Were they noted?
I strongly doubt that this last issue could greatly have affected the results, and the other analyses I have suggested would just build on what is already a very nice finding. In any case, a strongly recommended read for those who are into contagion:
Gerber, Alan, and Donald Green. 2000. “The Effects of Canvassing, Telephone Calls, and Direct Mail on Voter Turnout: A Field Experiment.” American Political Science Review 94: 653-63.
Green, Donald, Alan Gerber, and David Nickerson. 2003. “Getting out the Vote in Local Elections: Results from Six Door-to-Door Canvassing Experiments.” Journal of Politics 65: 1083-96.
Posted by David Lazer at October 4, 2007 9:12 PM