18 January 2006
Regular visitors to this blog have read (here, here, and here) about the recent field research conducted by Mike Hiscox and Nick Smyth of the Government Department on consumer demand for labor standards. After they described the difficulties that they faced in convincing retailers to participate in their experiments, several workshop participants remarked that the retailers should be paying them for the market research done on their behalf. Indeed, bringing rigorous experimental design to bear in such cases should be worth at least as much to corporations as the advice that they receive from consulting firms - and all we want is their data, not their money!
This discussion reminded me of an Applied Statistics talk last year given by Sendhil Mullainathan of the Harvard Economics Department on randomization in the field. He argued that there are many more opportunities for field experiments than we typically assume in the social sciences. One of the projects that he described during the talk was a field experiment in South Africa, in which a lender (unidentified for reasons that should become clear) agreed to several manipulations of its standard letter offering credit to relatively low-income, high-risk consumers. These manipulations included both economic treatments (varying the interest rate offered) and psychological ones (altering the presentation of the interest rate through frames and cues of various sorts). Among the remarkable things about this experiment is the sheer number of subjects - over 50,000 individuals (all of whom had previously borrowed from the lender) received letters. It is hard to imagine a field experiment of this magnitude funded by an academic institution. Of course, the motives of the lender in this case had little to do with scientific progress; it hoped that the lessons learned from the experiment would help the bottom line. The results from the experiment suggest that relatively minor changes in presentation dramatically affected the take-up rate of loans. As one example, displaying a single example loan amount (instead of several possible amounts) increased demand by nine percent.
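The logic of this kind of design is simple enough to sketch in a few lines: randomly assign each prior borrower to a letter variant, then compare take-up rates across the arms. The sketch below is purely illustrative - the treatment names, sample size, and take-up probabilities are my own stand-ins, not data from the actual study.

```python
import random

# Illustrative sketch of a randomized mail experiment.
# Treatment names and all numbers are hypothetical, not from the study.
random.seed(0)

TREATMENTS = ["many_example_loans", "single_example_loan"]

def assign(borrower_ids):
    """Randomly assign each borrower to one letter variant."""
    return {b: random.choice(TREATMENTS) for b in borrower_ids}

def take_up_rates(assignment, took_loan):
    """Fraction of borrowers in each arm who accepted the offer."""
    totals = {t: 0 for t in TREATMENTS}
    accepts = {t: 0 for t in TREATMENTS}
    for b, t in assignment.items():
        totals[t] += 1
        accepts[t] += took_loan[b]
    return {t: accepts[t] / totals[t] for t in TREATMENTS}

# Simulated outcomes: assume the single-example letter has a slightly
# higher underlying take-up probability (made-up numbers).
borrowers = list(range(1000))
assignment = assign(borrowers)
base_rate = {"many_example_loans": 0.080, "single_example_loan": 0.088}
took_loan = {b: int(random.random() < base_rate[assignment[b]])
             for b in borrowers}
rates = take_up_rates(assignment, took_loan)
```

Because assignment is random, any systematic difference in `rates` between the two arms can be attributed to the letter itself rather than to who received it - which is exactly what made the lender's manipulation informative.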
So the question is: why don't we do more of these kinds of experiments? One answer is obvious: social science is not consulting. The whole project of social science depends on our ability to share results with other researchers, something unlikely to please companies that would otherwise love to have the information. Unfortunately, in many cases, paying social scientists in data is probably more expensive than paying consultants in dollars.