Cash for review: improving journal referee performance
Here's a paper that will raise a few eyebrows among academic economists - how to improve the performance of journal referees. HT: MargRev.
Any self-respecting member of the economics profession (and of the scientific field in general) has at some point dealt with a drawn-out review process. When you send a paper to a journal hoping to get it published, it goes through a double-blind peer review process: if it passes the editor, it is usually sent to two referees, who decide whether the paper is good enough for the journal and, if so, provide comments and suggestions for improvement. If they give you a go and request a few minor or major changes, the paper goes back and forth several times. These things take time, sometimes even a few years. And that can be a real problem. I've heard stories of people sending an article to a journal and, in the meantime - during the three years of waiting for the article to actually get published - having the main assumptions they employed change completely. Imagine writing about Arab dictatorships before the Arab Spring, or about the oil industry before the shale gas revolution. Certain events can completely overturn one's hypothesis.
Another good example is the set of academic papers being published, even in the top journals, as the crisis was unfolding in 2008, 2009 and 2010. The range of topics was so diverse it included wine economics, online advertising (?) and Wikipedia bias. I recall Scott Sumner being baffled by the fact that not a single paper at the 2011 AEA conference was on contemporary monetary policy or crisis-related economic policy. At a time when the public was seeking answers from economists, no answers were coming in the form of top-notch academic research. But that's what blogs were for: many ideas bounced around through them, so economists did respond in a timely manner. It was the policymakers who decided to adopt a selective approach to economic policy, accepting some signals from economists while ignoring the rest.
The experiment
Anyway, three economists - Raj Chetty and Laszlo Sandor from Harvard and Emmanuel Saez from Berkeley - ran an experiment with referees at the well-respected Journal of Public Economics (JPubE). They wanted to see whether they could improve the speed and quality of peer review - in particular, to test which types of economic and social incentives could be used to improve referees' pro-social behavior.
The experiment ran for 20 months (2010-11) and covered 3,000 referee invitations at the JPubE. Chetty and Saez, who are editors of the journal, randomized the invited referees into one of four groups: (1) a six-week deadline (the control group); (2) a six-week deadline plus having their turnaround times posted on the journal's website at the end of the year (the social treatment); (3) a four-week deadline (the early-deadline treatment); and (4) a four-week deadline plus a $100 Amazon gift card for meeting it (the cash treatment). No punishment was imposed for missing a deadline; only positive incentives were used.
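To make the design concrete, here is a minimal sketch of how such a four-arm randomization could be coded up. This is purely illustrative - the group labels, invitation IDs and seeding are my own assumptions, not the authors' actual assignment procedure.

```python
import random

# Four arms mirroring the design described above (illustrative labels)
ARMS = [
    "control_6wk",        # (1) six-week deadline only
    "social_6wk_posted",  # (2) six-week deadline + turnaround times posted online
    "early_4wk",          # (3) four-week deadline
    "cash_4wk_100usd",    # (4) four-week deadline + $100 gift card if met
]

def assign_arms(invitation_ids, seed=0):
    """Randomly assign each referee invitation to one of the four arms."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    return {inv: rng.choice(ARMS) for inv in invitation_ids}

# Roughly 3,000 invitations, as in the experiment (IDs are hypothetical)
invitations = [f"inv_{i:04d}" for i in range(3000)]
assignment = assign_arms(invitations)
```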
Source: Chetty, Saez and Sandor (2014), "What Policies Increase Prosocial Behavior? An Experiment with Referees at the Journal of Public Economics"
The conclusions are very interesting (see the graph above). Short deadlines are extremely effective at increasing turnaround speed: when the deadline was shortened by two weeks, reviews arrived two weeks earlier on average, which is a great result. Attention also matters: reminders and deadlines are very effective at speeding up the process. The cash incentive (the dashed red line) is the most effective method, provided that reminders are sent just before the deadline. On the other hand, the social incentive of posting one's turnaround time online (the solid red line) was not much different from the control group with its regular six-week deadline; in both cases, however, a nudge from the editor (the horizontal dashed lines) helps speed up the whole process. This personalized approach from the editor was particularly successful at reducing review times among tenured professors, who respond less to cash and regular deadline incentives.
Alongside the speed of the review process they also tested review quality, fearing that a tighter deadline might produce lower-quality reviews. This didn't happen. Quality was measured by the average rate at which the editors follow the advice in the referee report and by the median number of words in the report; both were unaffected by any of the incentives.
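As a rough illustration of those two quality measures, here is how they could be computed from review records. The field names and the data below are hypothetical placeholders, not the paper's data.

```python
from statistics import median

# Hypothetical review records: did the editor follow the referee's advice,
# and how long was the report (placeholder values)
reviews = [
    {"arm": "control_6wk",     "editor_followed": True,  "report_words": 812},
    {"arm": "cash_4wk_100usd", "editor_followed": True,  "report_words": 790},
    {"arm": "early_4wk",       "editor_followed": False, "report_words": 655},
]

def quality_by_arm(records):
    """Per arm: the share of reports whose advice the editor followed,
    and the median report length in words."""
    by_arm = {}
    for r in records:
        by_arm.setdefault(r["arm"], []).append(r)
    return {
        arm: {
            "follow_rate":  sum(r["editor_followed"] for r in rs) / len(rs),
            "median_words": median(r["report_words"] for r in rs),
        }
        for arm, rs in by_arm.items()
    }
```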
The normative, 'policy' implication for journal refereeing is that editors can indeed shorten the review process without affecting its quality. Even though the cash incentive outperformed the rest (and is hard to implement, although journal publishers could finance it), simple methods such as shortening the deadline coupled with personalized reminders can be quite effective. Kudos to the authors for such a great experimental design.