September 2010

21 September 2010

What is the likelihood function?

An interesting 1992 paper by Bayarri and DeGroot entitled “Difficulties and Ambiguities in the Definition of a Likelihood Function” (gated version) grapples with the problem of defining the likelihood when auxiliary variables are at hand. Here is the abstract:

The likelihood function plays a very important role in the development of both the theory and practice of statistics. It is somewhat surprising to realize that no general rigorous definition of a likelihood function seem to ever have been given. Through a series of examples it is argued that no such definition is possible, illustrating the difficulties and ambiguities encountered specially in situations involving “random variables” and “parameters” which are not of primary interest. The fundamental role of such auxiliary quantities (unfairly called “nuisance”) is highlighted and a very simple function is argued to convey all the information provided by the observations.

The example that resonates with me is on pages 4-6, where they describe the ambiguity of defining the likelihood function when there is an observation y which is a measurement of x subject to (classical) error. There are several different ways of writing a likelihood in that case, depending on how you handle the latent, unobserved data x. One can condition on it, marginalize across it, or include it in the joint distribution of the data. Each of these can lead to a different MLE.
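To make the ambiguity concrete, here is a minimal sketch (not from the paper; the numbers and the simple normal-normal setup are assumptions for illustration). Suppose y_i = x_i + error with known error variance, and the latent x_i are draws from N(mu, tau2). Marginalizing over x gives one "likelihood" for (mu, tau2); keeping x in the joint density and profiling it out gives another, and the two disagree: the joint version grows without bound as tau2 shrinks, while the marginal version stays bounded.

```python
import math

# Hypothetical data for illustration: y_i = x_i + eps_i, with known
# measurement-error variance sigma2, and latent x_i ~ N(mu, tau2).
y = [1.2, -0.4, 0.8, 2.1, 0.3]
sigma2 = 1.0

def marginal_loglik(mu, tau2):
    # Marginalize x out: y_i ~ N(mu, tau2 + sigma2).
    v = tau2 + sigma2
    return sum(-0.5 * (math.log(2 * math.pi * v) + (yi - mu) ** 2 / v)
               for yi in y)

def joint_loglik(mu, tau2, x):
    # Keep x in the "likelihood": product of f(y_i | x_i) * f(x_i | mu, tau2).
    ll = 0.0
    for yi, xi in zip(y, x):
        ll += -0.5 * (math.log(2 * math.pi * sigma2) + (yi - xi) ** 2 / sigma2)
        ll += -0.5 * (math.log(2 * math.pi * tau2) + (xi - mu) ** 2 / tau2)
    return ll

def profiled_joint_loglik(mu, tau2):
    # For fixed (mu, tau2), the maximizing x_i is a precision-weighted
    # average of the observation y_i and the prior mean mu.
    x_hat = [(yi / sigma2 + mu / tau2) / (1.0 / sigma2 + 1.0 / tau2)
             for yi in y]
    return joint_loglik(mu, tau2, x_hat)

mu_hat = sum(y) / len(y)
# The marginal log-likelihood is bounded as tau2 -> 0, but the profiled
# joint version diverges (set x_i near mu and the N(x_i; mu, tau2) density
# blows up), so the two definitions cannot share an MLE for tau2.
```

The divergence of the joint version is the same pathology that arises whenever latent quantities are maximized over rather than integrated out, which is one way to see why the choice of how to treat the auxiliary x matters.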

Their point is that situations like this involve subjective choices (though all modeling requires subjective choices), and that the hermetic seal between the "model" and the "prior" is less airtight than we might think.

Posted by Matt Blackwell at 4:41 PM

14 September 2010

You are not so smart

You are not so smart is a blog dedicated to explaining self-delusions. The most recent post is on the Texas sharpshooter fallacy:

The Misconception: You take randomness into account when determining cause and effect.
The Truth: You tend to ignore random chance when the results seem meaningful or when you want a random event to have a meaningful cause.

Posted by Matt Blackwell at 6:33 PM