September 2009



29 September 2009

Athey on "Sponsored Search Advertising Auctions"

Please join us at the Applied Statistics workshop this Wednesday, September 30th, when we will be delighted to have the distinguished Susan Athey, Professor of Economics here at Harvard, presenting "A Structural Model of Equilibrium and Uncertainty in Sponsored Search Advertising Auctions" (joint work with Denis Nekipelov). Susan has passed along the following abstract:


Sponsored links that appear beside internet search results on the major search engines are sold using real-time auctions, where advertisers place standing bids that are entered in an auction each time a user types in a search query. The ranking of advertisements and the prices paid depend on advertiser bids as well as "quality scores" that are assigned for each advertisement and user query. Existing models assume that bids are customized for a single user query and the associated quality scores; however, in practice that is impossible, as queries arrive more quickly than advertisers can change their bids, and advertisers cannot perfectly predict changes in quality scores. This paper develops a new model where bids apply to many user queries, while the quality scores and the set of competing advertisements may vary from query to query. In contrast to existing models that ignore uncertainty and so produce a multiplicity of equilibria, we provide sufficient conditions for existence and uniqueness of equilibria, and we provide evidence that these conditions are satisfied empirically. We show that the necessary conditions for equilibrium bids can be expressed as an ordinary differential equation.
We then propose a structural econometric model. With sufficient uncertainty in the environment, the valuations are point-identified; otherwise, we propose a bounds approach. We develop an estimator for bidder valuations, which we show is consistent and asymptotically normal. We provide a Monte Carlo analysis to assess the small-sample properties of the estimator. We also develop a tractable computational approach to calculate counterfactual equilibria of the auctions.
Finally, we apply the model to historical data for several keywords. We show that our model yields lower implied valuations and bidder profits than approaches that ignore uncertainty. We find that bidders have substantial strategic incentives to reduce their expressed demand in order to reduce the unit prices they pay in the auctions; in addition, these incentives are asymmetric across bidders, leading to inefficient allocation. We show that for the keywords we study, the auction mechanism used in practice is not only strictly less efficient than a Vickrey auction, but also raises less revenue.
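For those who have not seen these auctions formalized, here is a minimal sketch of the quality-weighted generalized second-price (GSP) rule that the abstract takes as its starting point: ads are ranked by bid times quality score, and each winner pays the lowest per-click price that preserves its position. The bids, quality scores, and zero reserve price below are invented for illustration, and the paper's treatment of uncertainty across queries is not reproduced here.

```python
# A minimal sketch of the quality-weighted GSP rule. All bids, quality
# scores, and the zero reserve price are invented for illustration.

def gsp_outcome(bids, quality, n_slots):
    """Rank ads by quality score times bid; each winner pays the smallest
    per-click price that would keep it in its position."""
    order = sorted(bids, key=lambda ad: quality[ad] * bids[ad], reverse=True)
    outcome = []
    for pos in range(min(n_slots, len(order))):
        ad = order[pos]
        if pos + 1 < len(order):
            nxt = order[pos + 1]  # the next-ranked ad pins the price
            price = quality[nxt] * bids[nxt] / quality[ad]
        else:
            price = 0.0  # no competitor below; zero reserve price assumed
        outcome.append((ad, round(price, 3)))
    return outcome

# Hypothetical standing bids ($ per click) and quality scores for one query.
bids = {"ad_A": 2.50, "ad_B": 3.00, "ad_C": 1.75}
quality = {"ad_A": 0.9, "ad_B": 0.6, "ad_C": 0.8}
print(gsp_outcome(bids, quality, n_slots=2))
# -> [('ad_A', 2.0), ('ad_B', 2.333)]: ad_A outranks ad_B despite a lower
#    bid because of its higher quality score.
```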

The Applied Statistics workshop meets each Wednesday in room K-354, CGIS-Knafel (1737 Cambridge St). We start at 12 noon with a light lunch, with presentations beginning around 12:15 and we usually wrap up around 1:30 pm. We hope you can make it.

Posted by Matt Blackwell at 10:47 AM

23 September 2009

The placebo effect is growing?

Wired has a fascinating article about the placebo effect and how the pharmaceutical companies deal with it. Not only is there evidence that the placebo effect is growing (some drugs approved in the 80s and 90s would struggle to pass the FDA now), but it turns out there may be significant geographic differences in the strength of the effect:

Assumption number one was that if a trial were managed correctly, a medication would perform as well or badly in a Phoenix hospital as in a Bangalore clinic. Potter discovered, however, that geographic location alone could determine whether a drug bested placebo or crossed the futility boundary. By the late '90s, for example, the classic antianxiety drug diazepam (also known as Valium) was still beating placebo in France and Belgium. But when the drug was tested in the US, it was likely to fail. Conversely, Prozac performed better in America than it did in western Europe and South Africa. It was an unsettling prospect: FDA approval could hinge on where the company chose to conduct a trial.

I'm not sure how you would separate the geographic confounding of the drug response from the geographic confounding of the placebo response when all you observe is the difference between the two, but it is interesting nonetheless.

(via kottke)

UPDATE: I just wanted to clarify why I thought this article was interesting, so that folks do not think I believe all of the analysis it contains. The "effect" of the placebo treatment on its own is clearly nonsensical, as effects always need to be about comparisons. What is identified from a clinical trial is the difference between the placebo response and the treatment response. My interpretation of the article (which differs from the author's) is that there is a lot of variation in that difference, both over time and over geography, within the same drug. Since I have not read the academic articles that inform the piece, I'm not sure whether this variation is larger than we would expect given sampling variation, but the possibility of a systematic relationship is intriguing.

As Kevin notes in the comments below, some have been criticizing the article. It took a bit of searching (not that simple!), but I found a good response:

http://scienceblogs.com/whitecoatunderground/2009/09/placebo_is_not_what_you_think.php

The author of the response argues that the variation in the placebo response is simply sampling variance.
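To see the force of that claim, here is a quick toy simulation (all rates and arm sizes invented): even when the true placebo response rate is identical at every site, observed site-level rates scatter noticeably at realistic arm sizes.

```python
# A minimal sketch of the sampling-variance point (all numbers invented):
# every site shares the same true placebo response rate, yet the observed
# rates in modest-sized placebo arms still differ from site to site.
import random

random.seed(1)
TRUE_RATE = 0.35   # hypothetical common placebo response rate
ARM_SIZE = 80      # hypothetical patients per placebo arm

for site in ["Phoenix", "Bangalore", "Paris", "Boston"]:
    responders = sum(random.random() < TRUE_RATE for _ in range(ARM_SIZE))
    print(f"{site:>10} observed placebo response: {responders / ARM_SIZE:.2f}")
```

Whether the geographic differences described in the article are bigger than this kind of scatter is, of course, exactly the question at issue.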

Posted by Matt Blackwell at 10:04 AM

21 September 2009

Van Alstyne on "Network Structure and Information Advantage"

Please join us this Wednesday, September 23rd, at the Applied Statistics Workshop, when we will be fortunate to have Marshall Van Alstyne presenting "Network Structure and Information Advantage: The Diversity-Bandwidth Tradeoff." Marshall is an Associate Professor in the Department of Management Information Systems at Boston University, as well as a Research Associate at MIT's Center for E-Business. Marshall passed along the following abstract:

We propose that, in obtaining novel information, actors in brokerage positions face a tradeoff between network diversity and communication channel bandwidth. As the structural diversity of a network increases, the bandwidth of the communication channels in that network decreases, creating countervailing effects on the receipt of novel information. This argument is based on the observation that diverse networks are typically made up of weaker ties, characterized by narrower communication channels across which less diverse information is likely to flow. The diversity-bandwidth tradeoff is moderated by (a) the degree to which topics are uniformly or heterogeneously distributed over the alters in a broker's network, (b) the dimensionality of the information in a broker's network (whether the total number of topics communicated by alters is large or small), and (c) the rate at which the information possessed by a broker's contacts refreshes or changes over time. We test this theory by combining social network and performance data with direct observation of the information content flowing through email channels at a medium-sized executive recruiting firm. These analyses unpack the mechanisms that enable information advantages in networks and serve as a 'proof-of-concept' for using email content data to analyze relationships among information flows, networks, and social capital.

A copy of the paper is also available.
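As a rough illustration of the tradeoff in the abstract, the sketch below compares a node embedded in a closed, strong-tie clique with a broker spanning otherwise disconnected alters, on an invented network. Effective size (Burt's structural-holes measure, as implemented in networkx) stands in for diversity and mean tie weight for bandwidth; neither is necessarily the operationalization the paper itself uses.

```python
# A minimal sketch of the diversity-bandwidth intuition on an invented
# network: effective size as a stand-in for structural diversity, mean
# tie weight as a stand-in for channel bandwidth.
import networkx as nx

G = nx.Graph()
# A closed clique around "alice": redundant contacts, strong ties.
for u, v in [("alice", "bo"), ("alice", "cat"), ("bo", "cat")]:
    G.add_edge(u, v, weight=9)   # weight ~ messages per week, say
# A brokerage position for "dan": diverse, disconnected alters, weak ties.
for v in ["alice", "ed", "fay", "gus"]:
    G.add_edge("dan", v, weight=2)

diversity = nx.effective_size(G)  # structural diversity of each ego network
for node in ["alice", "dan"]:
    weights = [d["weight"] for _, _, d in G.edges(node, data=True)]
    print(f"{node}: effective size {diversity[node]:.2f}, "
          f"mean tie weight {sum(weights) / len(weights):.1f}")
```

The broker ("dan") comes out with the higher effective size but the lower mean tie weight, which is the countervailing pattern the abstract describes.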

The Applied Statistics workshop meets each Wednesday in room K-354, CGIS-Knafel (1737 Cambridge St). We start at 12 noon with a light lunch, with presentations beginning around 12:15 and we usually wrap up around 1:30 pm. We hope you can make it.

Posted by Matt Blackwell at 10:26 AM

15 September 2009

Goodrich on "Bringing Rank-Minimization Back In"

Please join us tomorrow, September 16th, when we are excited to have Ben Goodrich (Government/Social Policy) presenting "Bringing Rank-Minimization Back In: An Estimator of the Number of Inputs to a Data-Generating Process," for which Ben has provided the following abstract:

This paper derives and implements an algorithm to infer the number of inputs to a data-generating process from the outputs. Previous work dating back to the 1930s proves that this inference can be made in theory, but the practical difficulties have been too daunting to overcome. These obstacles can be avoided by looking at the problem from a different perspective, utilizing some insights from the study of economic inequality, and relying on modern computer technology.

Now that there is a computational algorithm that can estimate the number of variables that generated observed outcomes, the scope for applications is quite large. Examples are given showing its use for evaluating the reliability of measures of theoretical concepts, empirically testing formal models, verifying whether there is an omitted variable in a regression, checking whether proposed explanatory variables are measured without error, evaluating the completeness of multiple imputation models for missing data, and facilitating the construction of matched pairs in randomized experiments. The algorithm is used to test the main hypothesis in Esping-Andersen (1990), which has been influential in the political economy literature, namely that various welfare-state outcomes are a function of only three underlying variables.
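The abstract cannot do justice to the algorithm itself, but the linear special case conveys the flavor of the problem: if each observed outcome is (approximately) a linear combination of k unobserved inputs, the output matrix has numerical rank k, which its singular-value spectrum reveals. The sketch below is that standard stand-in, with invented dimensions and noise; it is not Ben's estimator, which is built for the harder general case.

```python
# A minimal sketch of the linear special case: recover the number of
# inputs from the numerical rank of the output matrix. The dimensions,
# mixing matrix, and noise level are all invented.
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_inputs, n_outputs = 500, 3, 8
inputs = rng.normal(size=(n_cases, n_inputs))    # unobserved inputs
mixing = rng.normal(size=(n_inputs, n_outputs))  # how inputs become outcomes
outputs = inputs @ mixing + 0.01 * rng.normal(size=(n_cases, n_outputs))

s = np.linalg.svd(outputs, compute_uv=False)
est = int(np.sum(s > 0.05 * s[0]))  # crude cutoff on the singular values
print("singular values:", np.round(s, 1))
print("estimated number of inputs:", est)  # expect 3 given the small noise
```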

The Applied Statistics workshop meets each Wednesday in room K-354, CGIS-Knafel (1737 Cambridge St). We start at 12 noon with a light lunch, with presentations beginning around 12:15 and we usually wrap up around 1:30 pm.

We hope you can make it.

Posted by Matt Blackwell at 10:47 AM

8 September 2009

Grimmer on "Quantitative Discovery from Qualitative Information"


Please join us tomorrow, September 9th, for our first workshop of the year, when we are happy to have Justin Grimmer presenting joint work with Gary King entitled "Quantitative Discovery from Qualitative Information: A General-Purpose Document Clustering Methodology."

Justin and Gary have provided the following abstract for their paper:

Many people attempt to discover useful information by reading large quantities of unstructured text, but because of known human limitations, even experts are ill-suited to succeed at this task. This difficulty has inspired the creation of numerous automated cluster analysis methods to aid discovery. We address two problems that plague this literature. First, the optimal use of any one of these methods requires that it be applied only to a specific substantive area, but the best area for each method is rarely discussed and usually unknowable ex ante. We tackle this problem with mathematical, statistical, and visualization tools that define a search space built from the solutions to all previously proposed cluster analysis methods (and any qualitative approaches one has time to include) and enable a user to explore it and quickly identify useful information. Second, in part because of the nature of unsupervised learning problems, cluster analysis methods are not routinely evaluated in ways that make them vulnerable to being proven suboptimal or less than useful in specific data types. We therefore propose new experimental designs for evaluating these methods. With such evaluation designs, we demonstrate that our computer-assisted approach facilitates more efficient and insightful discovery of useful information than either expert human coders using qualitative or quantitative approaches or existing automated methods. We (will) make available an easy-to-use software package that implements all our suggestions.
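The software is not yet available, but the premise (different clustering methods partition the same documents differently, and the set of resulting partitions is itself worth exploring) is easy to see with off-the-shelf tools. In the sketch below, the tiny corpus is invented and scikit-learn's methods merely stand in for the much larger space of methods the paper assembles.

```python
# A minimal sketch: two clustering methods applied to the same invented
# corpus, plus a measure of how much their partitions agree. Grimmer and
# King's search space is built from many such partitions at once.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.metrics import adjusted_rand_score

docs = [
    "tax cuts and the federal budget deficit",
    "senate vote on the budget and taxes",
    "flu vaccine trial shows a strong immune response",
    "hospital study of vaccine side effects",
    "world series pitching and home runs",
    "playoff baseball and a ninth-inning home run",
]
X = TfidfVectorizer().fit_transform(docs).toarray()  # bag-of-words features

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
agglom = AgglomerativeClustering(n_clusters=3).fit_predict(X)

print("k-means clusters      :", kmeans)
print("agglomerative clusters:", agglom)
print("agreement (adjusted Rand):", round(adjusted_rand_score(kmeans, agglom), 2))
```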

The Applied Statistics workshop meets each Wednesday in room K-354, CGIS-Knafel (1737 Cambridge St). We start at 12 noon with a light lunch, with presentations beginning around 12:15 and we usually wrap up around 1:30 pm.

Posted by Matt Blackwell at 12:00 PM