12 November 2012
Let me offer a short reflection on the apparent triumph of the professional pollsters over their critics in 2012. To briefly recap: in the weeks leading up to the election, there was a fairly energetic critique from the right that mainstream media surveys were systematically biased toward over-representing Democratic constituencies in their samples. This critique seemed to be authentically held--not simply spin to keep hope alive for prospective Republican voters--as reflected by the stunned reactions of commentators on Fox, as well as the Romney campaign's apparent surprise at the outcome. (As one Romney advisor stated, "I don't think there was one person who saw this coming.")
There are two striking things to observe about this moment. The first is how good a job professional pollsters did, and the second is how robust the social consensus was on the right that Romney was going to win.
First, on average, professional pollsters were remarkably on target (if just slightly biased against Obama)--ultimately both at the state level and nationally. I should note that this was a particularly predictable election: if you had simply said the 2012 map would look exactly the same as 2008, you would have had a hit rate of 96%, and if you had expected Obama to do a bit worse than in 2008 (say, subtracting 2 points across the board), 100%. At another level, this pattern is irrelevant, since it was not information that fed into the polls. What is important is how well pollsters did in the face of increased obstacles to doing a good job: response rates to surveys have plummeted, and increasing numbers of individuals rely exclusively on (hard-to-reach) mobile phones. Despite these challenges, surveys in aggregate are more accurate than ever--almost spot-on in 2012.
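The persistence baseline mentioned above is easy to make concrete. A minimal sketch, using only the fact that Indiana and North Carolina were the two contests (out of the 50 states plus DC) that flipped between 2008 and 2012:

```python
# Hit rate of a naive "persistence" forecast: predict that every contest
# (50 states plus DC) votes exactly as it did in 2008. Only Indiana and
# North Carolina flipped between the two elections.
contests = 51          # 50 states + Washington, DC
flipped = 2            # IN and NC: Obama in 2008, Romney in 2012
hit_rate = (contests - flipped) / contests
print(f"{hit_rate:.1%}")  # → 96.1%
```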
How is this possible? This is worth far more reflection than a blog entry can offer, because not all communities meet challenges like these so effectively. How does this community channel inherently flawed human judgments such that they are, on average, right? There are surely lessons to be learned about the construction of knowledge and professional practices in a way that has turned out to be quite functional collectively. Here I will simply speculate that it reflects three things. The first is that there is real-world feedback on the effectiveness of methods to address these challenges. This learning, in turn, becomes embodied in a set of practices, which will in turn be incrementally changed by future experience. Further, none of these challenges arrived abruptly, allowing an iterative process of learning how to adapt and of projecting those lessons onto future surveys, where the problems would be a bit worse. Second, there is a natural competition among survey firms to be accurate, and thus a motivation to take these lessons to heart. This is a decentralized process of both Darwinian selection and purposive adaptation. I strongly suspect there will be serious re-evaluation of likely-voter models at Gallup, for example, which had a notably poor performance for the second election in a row. Third, there is a collective process of sifting through best practices. While there is certainly some desire to keep the secrets of success private, in fact there is a certain necessary degree of transparency in methods; and this is a small world of professional friendships where knowledge is semi-permeable, allowing a degree of local innovation that provides short-run advantage while still letting good practices disseminate. That is, there may be (as I have written about elsewhere) a good balance in this system between exploration (development of new solutions) and exploitation (taking advantage of what is known to work).
The system of pollsters might be contrasted with that of pundits. Do you expect a Darwinian culling of the right-leaning pundits who missed the outcome? Surely not. Nor will there be an adjustment of practices on the part of pundits who largely served up a mix of anecdotal pablum to their readers.
All of this is not to say that there might not come a time when the community of pollsters converges on the wrong answers, or that future challenges will be such that there are no good answers. The data, however, do not suggest that such a moment is imminent.
And how did the right get it so wrong? How could the Romney campaign of successful political professionals, in part embedded in the same epistemic community as the broader set of pollsters, not have seen an Obama victory as a plausible (never mind likely) outcome? This was not a near miss on their part. Consider: at last count, you could have subtracted 4.7 points (!) from Obama's margin in every state and he would still have won (the electoral college, that is, not the popular vote). Romney's campaign, and many commentators on the right, were living in a parallel world, one with fewer minority and young voters than ours. Again, I don't know the answer to this question. Likely key ingredients: an authentic ambiguity in how to handle the aforementioned challenges; a strong desire to see a Romney victory; an informational ecosystem that today provides the opportunity to produce plausible-sounding arguments rationalizing any wishful thoughts one might have; and a relevant subcommunity that was small, centralized, and deferential enough that a few opinion leaders could trigger a bandwagon. The result was a madness of the crowd, as Mackay described it some 170 years ago. The consensus in the Romney campaign may also have reflected the candidate's own certainty of victory (as suggested by the apparent absence of a drafted concession speech), which may have discouraged the articulation of dissenting perspectives on the state of the campaign.
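The counterfactual above--shift Obama's margin down uniformly in every state and re-tally the electoral college--can be sketched in a few lines. The three-state map below is entirely hypothetical, for illustration only; the function name and all numbers are my own, not drawn from any actual election dataset:

```python
# Uniform-swing counterfactual: subtract a fixed number of points from the
# incumbent's margin in every state, then re-sum electoral votes for the
# states the incumbent still carries. Hypothetical three-state map.
def electoral_votes(margins, ev, swing=0.0):
    """Electoral votes the incumbent wins after a uniform `swing`-point
    reduction in every state-level margin."""
    return sum(ev[s] for s, m in margins.items() if m - swing > 0)

margins = {"A": 5.4, "B": 3.1, "C": -1.2}   # hypothetical margins (points)
ev      = {"A": 20,  "B": 15,  "C": 10}     # hypothetical electoral votes

print(electoral_votes(margins, ev))             # baseline: 35
print(electoral_votes(margins, ev, swing=4.7))  # after a 4.7-point swing: 20
```

The point of the exercise: if the incumbent's electoral-vote total stays above the winning threshold even at a 4.7-point uniform swing, the loss was not a near miss.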
Posted by David Lazer at November 12, 2012 7:43 PM