

18 January 2006

Social Science as Consulting

Mike Kellermann

Regular visitors to this blog have read (here, here, and here) about the recent field research conducted by Mike Hiscox and Nick Smyth of the Government Department on consumer demand for labor standards. After they described the difficulties they faced in convincing retailers to participate in their experiments, several workshop participants remarked that the retailers should be paying them for the market research done on their behalf. Indeed, bringing rigorous experimental design to bear in such cases should be worth at least as much to corporations as the advice they receive from consulting firms; and all we want is their data, not their money!

This discussion reminded me of an Applied Statistics talk last year given by Sendhil Mullainathan of the Harvard Economics Department on randomization in the field. He argues that there are many more opportunities for field experiments than we typically assume in the social sciences. One of the projects he described was a field experiment in South Africa, in which a lender (unidentified for reasons that should become clear) agreed to several manipulations of its standard letter offering credit to relatively low-income, high-risk consumers. These manipulations included both economic treatments (varying the interest rate offered) and psychological ones (altering the presentation of the interest rate through frames and cues of various sorts). Among the remarkable things about this experiment is the sheer number of subjects: over 50,000 individuals (all of whom had previously borrowed from the lender) received letters. It is hard to imagine a field experiment of this magnitude funded by an academic institution. Of course, the motives of the lender in this case had little to do with scientific progress; it hoped that the lessons learned from the experiment would help the bottom line. The results suggest that relatively minor changes in presentation dramatically affected the take-up rate of loans. As one example, displaying a single example loan amount (instead of several possible amounts) increased demand by nine percent.
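The logic of the design described above — independently crossing an economic treatment with a psychological one and randomizing each client into a cell — can be sketched in a few lines. This is a minimal illustration, not the study's actual protocol: the treatment values, the function name, and the 3x2 factorial layout are all hypothetical.

```python
import random

random.seed(0)  # fixed seed so the assignment is reproducible

# Hypothetical treatment arms, loosely following the design described above:
# an economic treatment (the interest rate offered) crossed with a
# psychological one (how many example loan amounts the letter displays).
INTEREST_RATES = [7.75, 9.75, 11.75]  # illustrative rates, not the study's
EXAMPLE_COUNTS = [1, 4]               # single example amount vs. several

def assign_treatments(client_ids):
    """Independently randomize each client into one cell of the 3x2 design."""
    return {
        cid: {
            "rate": random.choice(INTEREST_RATES),
            "n_examples": random.choice(EXAMPLE_COUNTS),
        }
        for cid in client_ids
    }

# One letter per prior borrower, mirroring the scale mentioned above.
letters = assign_treatments(range(50_000))
```

Because each factor is randomized independently, the effect of presentation (here, `n_examples`) can be compared against the effect of price (`rate`) on the same outcome scale — which is exactly what makes the "one example amount is worth nine percent of demand" comparison possible.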

So, the question is: why don't we do more of these kinds of experiments? One answer is obvious: social science is not consulting. The whole project of social science depends on our ability to share results with other researchers, something unlikely to please companies that would otherwise love to have the information. Unfortunately, in many cases, paying social scientists in data is probably more expensive than paying consultants in dollars.

Posted by Mike Kellermann at 12:44 AM