
15 September 2008

Are journal editors biased against certain kinds of authors?

In a working paper entitled "Can We Test for Bias in Scientific Peer Review?", Andrew Oswald proposes a method of detecting whether journal editors (and the peer review process generally, I suppose) discriminate against certain kinds of authors. His approach, in a nutshell, is to look for discrepancies between the editor's comparison of two papers and how those papers were ultimately compared by the scholarly community (based on citations). In tests he runs on two high-ranking American economics journals, he doesn't find a bias by QJE editors against authors from England or Europe (or in favor of Harvard authors), but he does find that JPE editors appear to discriminate against their Chicago colleagues.

While publication politics is of course interesting to me and other academics, I bring up this paper not so much for the results as for the technique. Since the most important decision an editor makes is whether or not to publish an article, the obvious way to determine whether editors are biased would be to look at that decision -- perhaps asking whether editors are more likely to reject articles by a certain type of author, controlling for article quality. One could imagine a controlled experiment of this type, but otherwise this is an unworkable design: there is no good general way to "control for quality," and at any rate the record of what was submitted where would be impossible to piece together. Oswald's design neatly addresses both of these problems. Instead of looking at the untraceable accept/reject decision, he looks at the decision to accept two articles and place them next to each other in an issue; not only does this placement convey information about the editor's judgment of the relative quality of the articles, but it also means that citations to those articles provide a plausible comparison of their quality, uncomplicated by differences in the relative impact of different journals.
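
To make the mechanics concrete, here is a minimal sketch of how a within-issue citation comparison of this kind might be operationalized. This is my own illustration, not Oswald's actual estimator: the data layout, the "flagged" label marking the author group of interest, and the permutation test are all assumptions made for the sake of the example. The idea is simply that if editors hold flagged authors to a higher bar, the flagged articles that do get published should out-perform their issue-mates in later citations.

```python
import random
from collections import defaultdict
from statistics import mean


def within_issue_gaps(articles):
    """Citation gap for each flagged article: its citations minus the mean of its issue-mates."""
    by_issue = defaultdict(list)
    for article in articles:
        by_issue[article["issue"]].append(article)
    gaps = []
    for issue_articles in by_issue.values():
        if len(issue_articles) < 2:
            continue  # need at least one issue-mate to compare against
        for article in issue_articles:
            if article["flagged"]:
                others = [a["citations"] for a in issue_articles if a is not article]
                gaps.append(article["citations"] - mean(others))
    return gaps


def permutation_test(articles, n_perm=10_000, seed=0):
    """Two-sided permutation p-value for the mean within-issue citation gap,
    obtained by reshuffling the group labels within each issue."""
    rng = random.Random(seed)
    observed = mean(within_issue_gaps(articles))
    by_issue = defaultdict(list)
    for article in articles:
        by_issue[article["issue"]].append(article)
    extreme = 0
    for _ in range(n_perm):
        shuffled = []
        for issue_articles in by_issue.values():
            flags = [a["flagged"] for a in issue_articles]
            rng.shuffle(flags)  # permute labels within the issue only
            shuffled.extend({**a, "flagged": f} for a, f in zip(issue_articles, flags))
        gaps = within_issue_gaps(shuffled)
        if gaps and abs(mean(gaps)) >= abs(observed):
            extreme += 1
    return observed, extreme / n_perm


if __name__ == "__main__":
    # Toy data: 'flagged' marks the author group we suspect editors treat differently,
    # e.g. the editor's own colleagues or authors from a particular country.
    toy = [
        {"issue": 1, "citations": 40, "flagged": True},
        {"issue": 1, "citations": 25, "flagged": False},
        {"issue": 1, "citations": 18, "flagged": False},
        {"issue": 2, "citations": 60, "flagged": True},
        {"issue": 2, "citations": 30, "flagged": False},
        {"issue": 3, "citations": 12, "flagged": False},
        {"issue": 3, "citations": 15, "flagged": False},
    ]
    gap, p = permutation_test(toy)
    print(f"Mean within-issue citation gap for flagged articles: {gap:.1f} (p = {p:.3f})")
```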

Oswald's approach of course rests on the assumption that citations provide an unbiased measure of the quality of a paper (at least relative to the other papers published in the same issue), which is probably not true: any bias we might expect among journal editors would likely be shared by scholars as a whole and would thus be reflected in citations. Oswald's test therefore really compares the bias of editors against the bias of the scholarly community as a whole: if everyone is biased in the same way, the test would never be able to reject the null hypothesis of no bias.

This kind of issue aside, it seems like this general approach could be useful in other settings where we want to assess whether some selection process is biased. I haven't had much of a chance to think about it -- does anyone have suggestions for other topics where this kind of approach could be, or has been, applied?

Posted by Andy Eggers at September 15, 2008 8:13 AM