
2 October 2007

Applied Stats Workshop - Tom Cook

This week the Applied Statistics Workshop presents Thomas Cook of the Department of Sociology, Northwestern University, who will give a talk entitled "When the causal estimates from randomized experiments and non-experiments coincide: Empirical findings from the within-study comparison literature." Here is an excerpt from the paper:

The present paper has several purposes. It seeks to update the literature since Glazerman et al. (2003) and Bloom et al. (2005) and to move it beyond its near-exclusive focus on job training. We have examined the job training studies and find nothing to challenge the past conclusions described above. However, the more recent studies allow us to broach three questions that are more finely differentiated than whether experiments and non-experiments produce comparable findings:

1. Do experiments and RDD studies produce comparable effect sizes? We have found three examples attempting this comparison.

2. Do comparable effect sizes result when the non-experiment depends on selecting one or more intact comparison groups that are deliberately matched on pretest measures of the posttest outcome, as recommended in Cook & Campbell (1979)? Thus, in a non-experiment with schools as the unit of assignment, intervention schools are carefully matched with intact non-intervention schools in the hope that the average treatment and comparison schools will not differ on pretest achievement, let us say, though they may differ on unobservables. We have found three studies with this focus.

3. Do experiments and non-experiments produce comparable effect sizes when the intervention and comparison units do differ at pretest, so that statistical adjustments or individual matches must be constructed to control for this demonstrated non-equivalence? This question has dominated the literature to date, and we found six studies outside of job training that asked it.
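The design behind question 3 can be illustrated with a small simulation. Everything below is an illustrative sketch, not code or data from the paper: the sample sizes, effect size, and the one-to-one nearest-neighbor matching rule are all assumptions chosen to show why adjusting for pretest non-equivalence matters.

```python
# Hypothetical sketch: treated and comparison units differ at pretest,
# so a naive difference in posttest means is biased. Matching each
# treated unit to the comparison unit with the closest pretest score
# is one simple adjustment. All data here are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 500
true_effect = 2.0  # assumed treatment effect for the simulation

# Comparison group drawn from a shifted pretest distribution,
# so the groups are non-equivalent at pretest.
pre_t = rng.normal(0.5, 1.0, n)   # treated units' pretest scores
pre_c = rng.normal(0.0, 1.0, n)   # comparison units' pretest scores

# Posttest depends on pretest plus noise (plus the effect if treated).
post_t = pre_t + true_effect + rng.normal(0.0, 1.0, n)
post_c = pre_c + rng.normal(0.0, 1.0, n)

# Naive estimate: raw difference in posttest means (biased here).
naive = post_t.mean() - post_c.mean()

# Adjusted estimate: one-to-one nearest-neighbor matching on the
# pretest, with replacement.
idx = np.abs(pre_t[:, None] - pre_c[None, :]).argmin(axis=1)
matched = post_t.mean() - post_c[idx].mean()

print(f"naive estimate:   {naive:.2f}")
print(f"matched estimate: {matched:.2f}")  # closer to the assumed effect
```

Because the treated group starts out ahead at pretest, the naive estimate overstates the effect; the matched estimate recovers something close to the assumed value, which is the kind of experiment-versus-adjusted-non-experiment comparison the excerpt describes.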

We will meet at noon in CGIS-Knafel N354, and the talk will begin at 12:15 pm. And, of course, a delicious free lunch will be provided.

Posted by Justin Grimmer at 1:19 AM