2015-08-03

Highly visible changes in scholarly publishing over the last twenty years include the emergence of preprint servers such as ArXiv, and the rapid growth of Open Access publishing. However, other equally important changes, many linked to the rise of Open Access, have received less publicity.

Richard Walker*1,2 and Pascal Rocha da Silva2 here summarize their recent study “Emerging trends in peer review — a survey” in Frontiers in Neuroscience.

1 Blue Brain Project, EPFL ENT CBS BBP / HBP, Biotech Campus, Chemin des Mines 9, CH-1211 Geneva, Switzerland

2 Frontiers, EPFL Innovation Park, Building I, CH-1015 Lausanne, Switzerland

*Correspondence: richard.walker@epfl.ch



Traditional peer review & the onset of the “silent revolution”

Until the 1990s, the vast majority of scientific papers were reviewed by a small number of anonymous, unpaid reviewers, who judged their scientific quality and importance, suggested revisions, and guided the editor’s publication decision. This was “classical peer review”. But today, classical peer review is giving way to new forms of review and to reviewless publishing (e.g. through preprint servers). In a recently published study in Frontiers in Neuroscience (Walker and Rocha da Silva, 2015), we systematically surveyed new forms of peer review in a sample of 82 journals, publishers and other channels of scholarly communication. The results point to a revolution in the way scientific papers are reviewed that will inevitably affect who and what is published, who wins grants and tenure, and ultimately the way science is done. Yet it is a silent revolution: many scientists are unaware of the transformation that is underway, and little is known about its impact.

Innovation in peer review has been driven by criticism of “traditional” peer review, and by the new business models introduced by Open Access publishing. It is widely accepted that classical peer review leads to delays in publication: successive rounds of review and revision typically take many months, and authors often submit to several journals before their paper is accepted. In fact, even papers that go on to win Nobel prizes are often rejected multiple times before publication (Campanario, 2009). The fact that reviewers are anonymous and that review reports are not published provides opportunities for editors and reviewers to favor or impede publication of particular papers (Kriegeskorte, 2012). Cases of serious error and fraud go undetected. Perhaps worst of all, most papers are reviewed by just two or three reviewers, who agree on recommendations to reject or to accept/revise at levels barely beyond chance (Kravitz, Franks et al., 2010). Mathematical modeling suggests that the resulting editorial decisions are little better than a lottery (Herron, 2012). Innovations in peer review attempt to address these issues.

Non-selective review

One of the most important innovations has been the introduction of so-called “non-selective” or “impact-neutral” review. Traditional print journals, with their strong focus on impact factors, limited page budgets and high marginal costs, use selective peer review to “pick” research papers likely to attract a large number of citations, often rejecting as many as 80-90% of submitted manuscripts. Online Open Access publishers, with their ability to publish a much broader range of papers, have adopted a more experimental approach, reflected in new peer review policies. In 2006, PLOS ONE stated that the journal would use peer review only to determine “whether a paper is technically sound and worthy of inclusion in the published scientific record”.

When Frontiers was launched in 2007, it adopted a similar criterion, mandating its editors and reviewers “to focus only on objective criteria evaluating the soundness of the study and to ensure that the results are valid, the analysis is flawless and the quality as high as possible”. Subsequently, “impact-neutral” review was taken on board by many other Open Access journals and publishers, including F1000Research, GigaScience, BMJ Open, PeerJ and ScienceOpen Research. Meanwhile, publishers such as the Hindawi Group, Biology Direct and the BioMed Central series seem to have adopted non-selective review informally, without incorporating it in their review guidelines. In our sample, only 11 out of 56 journals and publishers that adopted some form of formal review were “non-selective”. Nonetheless, in 2014, they collectively published more than 96,000 papers, four times more than all the “selective” channels put together. Although these figures do not represent the actual proportion of channels that have adopted non-selective review procedures, they are a clear indication of its growing importance. It should also be noted, however, that nearly all journals that have adopted non-selective review are in the life sciences. Most journals in other disciplines continue to use selective review.

Preprint servers

A second major development has been the rise of preprint servers. ArXiv, which pioneered this trend and which in 2015 published its 1,000,000th paper, was originally conceived as an online repository for preprints of articles whose authors would subsequently submit them to print journals. Today, it has become a publication channel in its own right, hosting high-quality papers that are never published elsewhere. After a rapid “access review” to check authors’ credentials, papers are published immediately, eliminating the delays and repeat submissions associated with peer review. Authors and readers are satisfied. But like non-selective review, preprint servers seem to have limited interdisciplinary appeal: most papers hosted by ArXiv come from mathematics, physics, and computer science. Attempts to attract papers from other disciplines, or to establish independent repositories for these disciplines, have had only limited success. In 2014, ArXiv hosted just 9,466 papers in the life sciences – 1% of the total. BioRxiv – which specializes in biology – hosted 9,653; Cogprints – an online repository for the cognitive sciences – contained 3,929. Attempts by commercial publishers to establish cross-disciplinary preprint servers have also been unsuccessful. The largest, Nature Precedings (founded in 2006), ceased to accept new publications in 2012.

Open review

A third important development is the gradual shift from anonymous to “open review”, in which reviewers’ names and reports are revealed to authors and – in most cases – published, and which often provides for interaction among reviewers and authors. Journals and publishers adopting open review include the American Journal of Bioethics, the BioMed Central series, The BMJ, eLife, EMBO Journal, the Frontiers series, PeerJ, and Philica. In Frontiers, for example, reviewers of rejected papers maintain their anonymity – defusing the argument that open review may inhibit open expression of critical views. Of the 56 channels in our study that used formal peer review, 34 revealed reviewer names at some stage in the review process and a further 6 allowed reviewers to choose whether or not to reveal their names.

In parallel with the move towards open review, other publishers have shifted from “single blind” to “double blind” procedures, in which reviewers do not see authors’ names. In the humanities and social sciences, double blind review has always been standard, but outside these disciplines it is still highly unusual. Nonetheless, in the last two years, a small number of journals in the “hard” sciences (e.g. Behavioral Ecology, IEEE Communications Letters, Nature Climate Change, Nature Genetics) have begun to adopt the practice. In February 2015, Nature Publishing Group announced that it too would offer a double blind option to its authors. It has been claimed that this option may reduce gender bias and bias against authors from low-ranking institutions.

Other innovations in review

Other innovations have had a smaller impact. One example is post-publication review. Channels that adopt this system publish all submitted papers that pass an initial access review, but decide their status as official publications on the basis of a formal peer review that takes place after publication. This system, first introduced in 1997 by Electronic Transactions in Artificial Intelligence, was later adopted by Atmospheric Chemistry and Physics and its sister journals from Copernicus Publications, several of which have achieved high rankings. Recently, other publishers (F1000Research, Semantic Web Journal, Philica) have adopted similar policies. However, in 2014, channels using post-publication review published about 8,400 papers, just 7.5% of the number published by channels that reviewed papers before publication (112,000).

Similar considerations apply to attempts to open the review process to relevant scientific communities, as proposed in Kriegeskorte (2012). Two journals in our sample – Electronic Transactions in Artificial Intelligence and the Semantic Web Journal – combine community review with classical review, and two others – Nature and the Shakespeare Quarterly – have experimented with review systems in which any reader is free to contribute a review. More significant, in terms of quantitative impact, has been the tiering system introduced by Frontiers, which implicitly introduces user evaluation into the review process by inviting the authors of the 10% of Frontiers research papers with the most views and downloads (“tier 1” articles) to transform their paper into a “Focused Review” for a more general academic audience. To date, however, Frontiers is the only publisher to use such a system.

Finally, we should mention the increasing importance of informal channels of communication, such as Twitter and the blogosphere, and of services such as F1000Prime, PubMed Commons, PubPeer, Mendeley, and ResearchGate that offer dedicated reader review. In several prominent cases (e.g. claims about cells that use arsenic in place of phosphorus, a purported proof that P≠NP, and the so-called STAP technique for the generation of pluripotent stem cells), user comments on PubPeer, ResearchGate, blogs and Twitter have played an important role in raising doubts about high-profile scientific claims that later turned out to be unfounded. However, critics point out that this kind of commentary is only relevant for papers that attract extraordinary public attention (“the 1% of scientific papers”) (Fox, 2014), and frequent attempts by journals to attract reader commentary on less prominent papers have been largely unsuccessful. This suggests that informal reader commentary, while important in special circumstances, has only a limited role to play in the “normal” scientific process.

Summary & looking ahead

Summarizing the results of our survey, what we see is a process of rapid innovation and diversification, in which a monolithic, “one size fits all” system of peer review is gradually being replaced by a range of different review mechanisms, many of which are used only within specific disciplines. Nearly all these changes (e.g. non-selective review, open review) have been pioneered by Open Access publishers, while publishers with traditional business models have been slower to adopt the new practices.

What are the effects? Supporters of classical peer review argue that it is the “lynchpin” of the scientific process, and that funding agencies, selection committees and readers require the signal of quality it provides. Supporters of new models of peer review counter that it is a non-transparent and arbitrary process that is structurally incapable of providing such a signal; point to the extremely poor correlation between review scores and numbers of subsequent citations; and argue that the key metric for the success of scientific publishing should be the ease and timeliness with which valid scientific results are shared within the scientific community.

These are issues of policy that must necessarily remain open. But tackling them requires an evidence base, and today this is still missing. What are the effects of new systems of peer review and reviewless publishing on the timeliness of scientific publishing and the quality of the papers that are published? Our experience at Frontiers is that impact-neutral, collaborative review allows our journals to achieve high impact factors while publishing large numbers of high-quality papers, facilitating the publication of papers that would be hard to publish in traditional journals and drastically reducing delays. But we need systematic comparative studies to substantiate these claims. It is also plausible that non-classical review can help reduce bias against female authors, authors from developing countries, authors from low-prestige institutions and authors whose native language is not English; but a recent study of our own, in which we looked at review results from Frontiers and from computer science conferences that use classical peer review, found little evidence of bias in either – and thus no sign that either system is superior in this respect (Walker, Barros et al., 2015).

Rigorous quantitative studies of the impact of new forms of review have been rare and inconclusive. As a result, most of the arguments for and against have taken place in the blogosphere or through editorials and “opinion pieces”. This is not good enough. If the impact of peer review is as large as we suspect, the time is ripe for it to become an object of scientific study in its own right.

REFERENCES

Campanario, J. M. (2009). Rejecting and resisting Nobel class discoveries: accounts by Nobel Laureates. Scientometrics 81 (2): 549-565. doi: 10.1007/s11192-008-2141-5.

Fox, J. (2014). Post-publication review is here to stay – for the scientific 1%. Dynamic Ecology. Accessed February 13, 2014.

Herron, D. M. (2012). Is expert peer review obsolete? A model suggests that post-publication reader review may exceed the accuracy of traditional peer review. Surgical Endoscopy 26 (8): 2275-2280. doi: 10.1007/s00464-012-2171-1.

Kravitz, R. L., P. Franks, M. D. Feldman, M. Gerrity, C. Byrne and W. M. Tierney (2010). Editorial peer reviewers’ recommendations at a general medical journal: are they reliable and do editors care? PLOS ONE 5 (4): e10072. doi: 10.1371/journal.pone.0010072.

Kriegeskorte, N. (2012). Open evaluation: a vision for entirely transparent post-publication peer review and rating for science. Frontiers in Computational Neuroscience 6 (79). doi: 10.3389/fncom.2012.00079.

Walker, R., B. Barros, R. Conejo, K. Neumann and M. Telefont (2015). Bias in peer review: a case study. F1000Research 4 (21). doi: 10.12688/f1000research.6012.1.

Walker, R. and P. Rocha da Silva (2015). Emerging trends in peer review—a survey. Frontiers in Neuroscience 9 (169). doi: 10.3389/fnins.2015.00169.
