Highlights

- Anomalous activity identified among a relatively small but extremely active set of editors.

- Bias in manuscript handling time is strongest among the 10 most-active editors.

- Case study of anomalous editors reveals perverse incentives oriented around self-citation.

- Megajournals should list handling editor in manuscript byline for transparency.

- Editorial boards with active researchers should be multi-tiered and have activity quotas.

Abstract

Since their emergence just a decade ago, nearly 2% of scientific research is now published by megajournals, representing a major industrial shift in the production of knowledge. Such high-throughput production stresses several aspects of the publication process, including the editorial oversight of peer review. As the largest megajournal, PLOS ONE has relied on a single-tier editorial board comprised of ∼7000 active academics, who thereby face conflicts of interest relating to their dual roles as both producers and gatekeepers of peer-reviewed literature. While such conflicts of interest also affect the editorial boards of smaller journals, little is known about how the scalability of megajournals may introduce perverse incentives for editorial service. To address this issue, we analyzed the activity of PLOS ONE editors over the journal's inaugural decade (2006–2015) and find highly variable activity levels. We then leverage this variation to model how editorial bias in the manuscript decision process relates to two editor-specific factors: repeated editor-author interactions and shifts in the rates of citations directed at editors – a form of citation remuneration analogous to self-citation. Our results indicate significantly stronger manuscript bias among a relatively small number of extremely active editors, who also feature relatively high rates of self-citation within the manuscripts they handle. These anomalous activity patterns are consistent with perverse incentives and the temptations they offer at scale, as theoretically grounded in the “slippery-slope” evolution of apathy and misconduct in power-driven environments. By applying quantitative evaluation to the gatekeepers of scientific knowledge, we shed light on various ethics issues crucial to science policy – in particular, calling for more transparent and structured management of editor activity in megajournals that rely on active academics.

1. Introduction

The emergence and rapid growth of megajournals1 in the last decade represents a drastic industrial paradigm shift in the production of scientific knowledge (Binfield, 2013; Bjork, 2015; Pan, Petersen, Pammolli, & Fortunato, 2018; Petersen, Pan, Pammolli, & Fortunato, 2019; Solomon, 2014; Solomon & Bjork, 2012; Wakeling et al., 2016). This transition places pressure on several fundamental aspects of the scientific endeavor. First, the personnel resources required to referee the 60,000+ megajournal articles published each year are quite substantial (Binfield, 2013). Second, such publication volume also stresses the cognitive and technological capacity of individual scientists in their ability to search, retrieve, and organize the research literature. By way of example, over its first 6 years, PLOS ONE grew at an annual rate of 58%, roughly 18 times larger than the net growth rate of scientific publication over the last half-century (Petersen, Pan, et al., 2019). Consider, for example, 2012, in which the 23,468 articles published by PLOS ONE alone represented approximately 1 in every 1000 science publications indexed by the Web of Science (Pan et al., 2018). And third, megajournals rely on a highly scalable model for managing the scientific publication process. In particular, PLOS ONE relies on thousands of acting scientists comprising its editorial board, who simultaneously continue their role as research leaders. The dichotomy of being both a producer and gatekeeper of knowledge, which is also common to other journal editorial boards, brings forth the conditions for conflict of interest – as scientists must balance conflicting incentives arising from their duties as both authors and editors.

Oversight of the editorial board is a formidable challenge in megajournals, calling for strategic management policies to document editor activities and address unintended incentives. However, our fundamental understanding of this problem is hindered by the lack of transparency – both during the review process and also post-publication.2 Ironically, while this lack of transparency – e.g. blinding of authors, reviewers, and editors – is justified as facilitating unbiased peer review, it makes external monitoring of operational misconduct difficult. This tradeoff is an important consideration for the management of scientific practice, not least because research shows that misconduct can arise organically from the basic pursuit of internal (and external) power (Malhotra & Gino, 2011) and the innate difficulty of avoiding temptation in decision-heavy endeavors (Gino, Schweitzer, Mead, & Ariely, 2011) – conditions characteristic of science.

For these reasons, the oversight of editor activity is necessary in order to address social and cognitive biases that can enter into the manuscript decision process. For example, the multi-disciplinary journal Proceedings of the National Academy of Sciences (PNAS) has a two-tiered editorial board system in which National Academy of Sciences members serve as editors of individual submissions, and a smaller rotating Editorial Board provides an additional oversight layer for approving final decisions. Similarly, Management Science also employs a two-tier system comprised of a rotating Editor-in-Chief who serves above a second layer of “Department Editors” who oversee the review and manuscript decision process for individual submissions. There are many additional examples of journals that employ multi-tiered editorial boards, e.g. comprised of principal editors, associate editors, and advisors. Compared with the volume of megajournals, the relatively small size of traditional print journals limits the net activity of any given editor.

Contrariwise, megajournals have developed around principles of scalability. Consequently, megajournal editors also have the opportunity to scale their decision-making power beyond the levels that are available through editorial board service in smaller journals. Given the recency of this paradigm, little is known about the variation in megajournal editor activity, the upper limits of extreme activity, and the degree to which perverse self-citation incentives may explain such extreme activity. Here we address this knowledge gap by performing an in-depth analysis of the largest journal in the world, PLOS ONE, over its first 10 years of publishing (2006–2015). Not surprisingly, this journal also has the largest distributed editorial board of any megajournal. Another distinction, one that is common to the entire family of PLOS journals and commendable for its leadership in supporting open and transparent science, is the explicit reporting of the particular handling editor associated with each published article. Thus, by combining handling editor, manuscript, author and post-publication citation data for each article, we constructed a large multi-variable dataset centered around the 6934 PLOS ONE editors. The longitudinal nature of this dataset facilitates identifying the role of social factors underlying editor manuscript decisions, thereby providing insight into a domain of science that has traditionally been poorly documented, since most journals do not reveal editor-article associations. As such, we contribute to recent efforts aimed at measuring biases in the editor manuscript decision process (Bravo, Farjam, Moreno, Birukou, & Squazzoni, 2018; Card & DellaVigna, 2017; Colussi, 2018; Sarigöl, Garcia, Scholtes, & Schweitzer, 2017).

The results of our analysis indicate that a relatively small set of PLOS ONE editors exceed reasonable activity levels. By way of comparison, we observe 85 PLOS ONE editors (or 1% of all editors) with activity levels exceeding that of the most active PNAS editor (see Fig. 1). Moreover, we identify 10 extremely active editors (denoted collectively by XE) who exhibit significant differences when compared to the remaining PLOS ONE editors. Specifically, articles handled by XE are accepted significantly faster, feature higher rates of citations to the editors’ research, and have lower citation impact relative to other PLOS ONE research articles.

Fig. 1. The distribution of editor activity in three multi-disciplinary journals. (A) Comparison of distributions of annual activity nE (per editor per year) for three journals with distributed editorial boards comprised of acting academics. This complementary cumulative distribution, CCDF(≥ nE), plots the fraction of all editors that oversee nE or more articles per year, showing that there is a small but significant subset of PLOS ONE editors (1%) exhibiting extremely high activity levels. (B) Comparison of P(SE), calculated as kernel density estimates, of normalized annual activity SE (per editor per year); SE measures the annual activity of a given editor relative to the median activity across all journal editors in a given year, thus better accounting for variation in journal size. A small but significant subset of extremely active editors is distinguished by comparing the right tails of the distributions. Vertical dashed lines indicate the activity level corresponding to the top-5% of all editors for a specific journal.
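
To make the two quantities in Fig. 1 concrete, the following minimal sketch computes the annual activity CCDF (panel A) and the median-normalized activity SE (panel B) from an editor-year activity table; the table and its column names (editor_id, year, n_articles) are illustrative assumptions rather than the study's actual data schema.

```python
import numpy as np
import pandas as pd

# Illustrative editor-year activity table (column names are assumptions):
# one row per editor per year; n_articles = manuscripts handled that year.
df = pd.DataFrame({
    "editor_id":  ["A", "A", "B", "C", "C", "D"],
    "year":       [2012, 2013, 2012, 2012, 2013, 2013],
    "n_articles": [3, 5, 40, 1, 2, 7],
})

def ccdf(values):
    """Return (x, fraction of editors with activity >= x), as in Fig. 1A."""
    x = np.sort(np.unique(values))
    frac = np.array([(values >= v).mean() for v in x])
    return x, frac

x, frac = ccdf(df["n_articles"].to_numpy())

# Normalized annual activity S_E (Fig. 1B): each editor's activity divided by
# the median activity across all editors of the same journal in the same year.
df["S_E"] = df["n_articles"] / df.groupby("year")["n_articles"].transform("median")

print(list(zip(x, frac)))
print(df)
```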


As such, after documenting the descriptive statistics on editor activity levels, we shift the focus to the outliers in the editor activity distribution and identify anomalous statistical differences in manuscript handling outcomes between XE and the remaining editors. Our results provide complementary lines of evidence supporting an underlying self-citation strategy. Determining with certainty whether these differences reflect apathy or misconduct on the part of XE is beyond the scope of our analysis. Independent of the underlying cause of the identified bias in editor manuscript handling, another issue we bring to light is the need for better editorial board oversight in order to address editorial apathy and/or misconduct, which may otherwise persist unchecked in a large single-tier editorial board system. Thus, in addition to standard editor policies (Editorial Policy Committee, 2012), our results lend support for a two-tiered editorial board model, one that places increased emphasis on the accountability of manuscript-level editors. This is a relevant publication norm to address because it is quite possible that such anomalous editor activity is more widespread in science than currently appreciated, and would be evident for other journals if their editor data were also available.3 It is also a timely issue to address in light of the relatively rapid growth of the megajournal ecosystem (Petersen, Pan, et al., 2019).

Addressing underlying sources of the issue may be as simple as implementing editor activity quotas. Also, journals with distributed boards that do not specify the manuscript handling editor in the article metadata byline should strongly consider following the example of PLOS journals in making this editor handling information readily available. Taking this idea further, it would also benefit science to make all prior reviewer reports openly available after a certain number of years in order to support a more rigorous assessment of the peer-review system. Steps in this direction will increase the transparency of the publication process, facilitate trust in the peer review process, and foster the development of data sources, methods and protocols for rigorously assessing the integrity of the scientific peer review system (Editorial Policy Committee, 2012).

In what follows, we provide background and motivation for our study in Section 2, and specify our data sources and statistical measures in Section 3. We present descriptive results in Sections 4.1, 4.2, where we characterize the extremely skewed activity levels and manuscript decision times at PLOS ONE. By contrasting editor activity levels at PLOS ONE with PNAS and Management Science, two additional journals that also employ distributed editorial boards comprised of academic editors, we argue that the activity levels among the most prolific PLOS ONE editors are anomalous – thereby meriting further in-depth analysis.

As such, we continue in Sections 4.3, 4.4 to explore how and why an editor might take on extreme activity levels for strategic gain. In Section 4.3 we investigate the how by using longitudinal panel regression to show that when editors handle articles by the same set of returning authors, they accept these particular manuscripts significantly faster than if the authors are first-timers; we then show that this pattern is even stronger among the 10 extremely active editors. Similarly, we also show that manuscript handling times are faster if the manuscript authors include references that cite the editor's research, an effect that is also larger in magnitude among XE.
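
As an illustration of the kind of longitudinal panel regression described above (a sketch under assumed variable names, not the paper's exact specification), one could regress log handling time on repeat-author and editor-citation indicators, interacted with an XE dummy:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical article-level table (file and column names are assumptions):
#   editor_id     : identifier of the handling editor
#   year          : publication year
#   log_days      : log of the submission-to-acceptance time (days)
#   repeat_author : 1 if the editor previously handled a paper by these authors
#   cites_editor  : 1 if the manuscript cites the handling editor's research
#   is_XE         : 1 if the handling editor is one of the 10 extremely active editors
df = pd.read_csv("plos_one_articles.csv")

# Year fixed effects absorb journal-wide trends; the interaction terms test
# whether the repeat-author and editor-citation effects are larger among XE.
model = smf.ols(
    "log_days ~ repeat_author * is_XE + cites_editor * is_XE + C(year)",
    data=df,
)
# Cluster standard errors by editor to account for within-editor correlation.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["editor_id"]})
print(result.summary())
```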

In Section 4.4 we address the why by measuring the degree to which preferential treatment of repeat authors is reciprocated through citations directed back at the editor's research publications in the form of citation remuneration, providing substantial evidence for editor-author “backscratching”. According to our analysis, we estimate a lower limit of hundreds of citations that XE could reap by aggressively scaling up their manuscript handling activity.
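
To see how such citation gains scale with handling activity, a simple back-of-envelope calculation (with hypothetical placeholder numbers, not the paper's estimates) illustrates why aggressive scaling can plausibly yield citations of this order:

```python
# Back-of-envelope illustration of citation remuneration at scale.
# All numbers below are hypothetical placeholders, not the paper's estimates.
def citation_remuneration(n_per_year, frac_citing, cites_per_article, years):
    """Citations an editor accrues if a fraction of handled manuscripts
    each add a few citations to the editor's own work."""
    return n_per_year * frac_citing * cites_per_article * years

# e.g. 200 manuscripts/year, 10% of them citing the editor once, over 5 years
print(citation_remuneration(200, 0.10, 1, 5))  # -> 100.0
```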

In Section 4.5 we pursue these lines of evidence further by focusing on three anomalous editors who are identified as simultaneous outliers in various categories, including their net activity and the frequency of citations to their work in the manuscripts they accept. Using the complete career publication records of these active researchers, we show that anomalous citation rates occur not just among the PLOS ONE articles they accept, but that their self-citation rates within their own published research also exceed their baseline citation rate in the literature.
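
A minimal sketch of the type of comparison made in this case study, assuming reference-level tables with a boolean flag marking citations to the focal editor's own work (all file and column names are hypothetical):

```python
import pandas as pd

# Hypothetical reference-level tables (file and column names are assumptions):
# each row is a single reference; cites_editor flags whether it points to a
# publication authored by the focal editor.
handled = pd.read_csv("references_in_handled_articles.csv")
own = pd.read_csv("references_in_editor_publications.csv")

# Rate of citations to the editor within the manuscripts he or she accepted...
rate_handled = handled.groupby("editor_id")["cites_editor"].mean()
# ...versus the editor's self-citation rate within his or her own publications.
rate_self = own.groupby("editor_id")["cites_editor"].mean()

comparison = pd.concat(
    {"handled_articles": rate_handled, "own_publications": rate_self}, axis=1
)
print(comparison)
```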

We summarize our results in Section 5, which concludes with straightforward and feasible policy recommendations pertaining to megajournal management and editorial board oversight.