NEJM and Lancet high-profile retractions: an academic wake-up call

The back-to-back retractions in early June 2020 of two highly publicized COVID-19 papers in the New England Journal of Medicine (NEJM) (Mehra et al. 2020a) and The Lancet (Mehra et al. 2020b), as a result of the unreliable or non-existent Surgisphere-derived data upon which those papers’ analyses were based, serve as an important alert to the biomedical academic community. Surgisphere is a firm that specializes in hospital and health care analytics. A thematically related SSRN preprint by two of the same authors (Amit Patel and Sapan Desai), cited by Sharun et al. (2020), also silently disappeared.Footnote 1 Silently retracted, withdrawn or disappearing COVID-19 preprints are not being sufficiently discussed by academics, yet they constitute a fundamental threat to the integrity of open access (Teixeira da Silva 2020a). On the heels of those high-profile retractions, an editorial by Wallis (2020) that alluded to the creation of a COVID-19 Severity Scoring Tool, based on a collaboration between SurgisphereFootnote 2 and the African Federation for Emergency Medicine (AFEM), was retracted and republished.

To date (November 3, 2020), the Mehra et al. (2020a, b) papers have, according to Google Scholar (GS), accrued 591 and 701 citations, respectively. Currently, there are 19 versions of Mehra et al. (2020a) and 59 versions of Mehra et al. (2020b) shown in GS, with no indication in GS that the articles have an expression of concern or that they have been retracted.Footnote 3 A Web of Science search with the title “Cardiovascular disease, drug therapy, and mortality in Covid-19” yielded three results, with Mehra et al. (2020a) showing 203 citations in core collections; many of these citing papers were published after the expression of concern and retraction. The expression of concern is cited 5 times, and the retraction notice 45 times. A Web of Science title search for the Mehra et al. (2020b) paper yielded two results: the retraction notice, cited 173 times, and the expression of concern, cited 8 times. A PubMed search yielded 148 citations for Mehra et al. (2020a), 9 for its expression of concern and 104 for its retraction notice, as well as 195 citations for Mehra et al. (2020b), 15 for its expression of concern and 82 for its retraction notice.

The NEJM and The Lancet retractions, which took place within days after errata and/or expressions of concern had been published, call for added awareness, reflection, and reform. These two journals, in particular, are not traditionally associated with exploitative or predatory behaviors, which are often characterized by poor or non-existent peer review and editorial oversight (Teixeira da Silva et al. 2019). The NEJM has a 2019 Clarivate Analytics journal impact factor (JIF) of 74.699 while the 2019 JIF of The Lancet is 60.392. Those three retracted papers, if used or cited, may constitute a public health risk because potentially dangerous and/or misleading information related to public health was released to the public, presented as clinically and academically valid studies. According to The Guardian,Footnote 4 the World Health Organization (WHO) halted trials of hydroxychloroquine (HCQ) in response to the findings of the Mehra et al. (2020b) paper, but resumed them soon after its retraction. Fogel (2018) listed 11 reasons why clinical trials might fail, one of them being unreliable data.

A survey of scientists by De Gruyter indicated that about 50% of respondents “have ‘no time at all’ or ‘less time’ for research and writing”.Footnote 5 This is an important issue, since the peer reviewer pool serving as the base for the peer review of COVID-19 papers likely stems from this same group of academics. Conventional correction of the literature may thus be insufficient. Academics must therefore build their own defenses and strategies to certify the legitimacy of their research (Lakens 2020), because peer review, even in top-ranked journals, may be fallible. As the number of retracted COVID-19 papers increases (Abritis et al. 2020; Teixeira da Silva et al. 2020), such cases can serve as learning points for fortified ethics guidelines and policies.

Risks of select publishing practices during the pandemic

One possible reason why an academic might continue to cite unreliable literature that poses a danger to public health may be the poor indication of a paper’s retracted status: an inconspicuous small red notice on the header of the retracted NEJM paper versus a bold “retraction” stamped across the page of the retracted The Lancet paper. Another possibility may be the existence of unretracted copies on social media, third-party websites or the black/pirate open access site Sci-Hub. Some papers that cited these now-retracted papers (e.g., Boulware et al. 2020) might themselves need to be corrected, while the metrics of these journals also need to be adjusted (Teixeira da Silva and Dobránszki 2018).

Another risk may be the rise or fortification of parallel academic publishing markets, of low quality and/or predatory in nature, that might exploit COVID-19 papers for profit (extraction of article processing fees), or that authors might exploit for quick and easy publications (Teixeira da Silva 2020b). The trade-off between rigor in peer review and the speed of publishing COVID-19 research critical for public health (Fig. 1) should be less of a consideration for top journals, where editors need to maintain strict publishing standards (Kun 2020; Matias-Guiu 2020; Palayew et al. 2020).

Fig. 1 Trade-off between rigor and speed in peer review. Pressure to achieve the latter may result in a compromise of the former. This phenomenon, which has become acute in the COVID-19 era, has particularly serious reputational consequences for high-ranking journals

Some editors noted a negative impact on peers’ willingness to review papers, citing overwork or preoccupation as reasons for declining (Toth 2020). Others called on editors to curtail requests for additional experiments and to shorten the revision period, suggesting that some peer review might be rushed and that some results might be too provisional or superficial, potentially lowering academic standards rather than upholding them (Eisen et al. 2020; Horbach 2020). There are also risks of badly written, superficial and inaccurate systematic reviews (Yu et al. 2020).

There is also a risk that other important health issues are given less priority. Researchers focusing on other important contagious diseases, such as Ebola or influenza, or on issues they might perceive as essential to humanity, such as climate change, HIV/AIDS, cancer research, or malnutrition, might view COVID-19-related research as currently receiving preferential treatment, i.e., a crowding-out effect, or pandemic research exceptionalism (London and Kimmelman 2020). Barakat et al. (2020) found that the median time to publication for COVID-19-related papers was eight times shorter than for a control group of non-COVID-related papers from the previous year.

Early data (January 1 to June 30, 2020) from Clarivate Analytics’ Web of Science and Elsevier’s Scopus indicates that less than 50% of papers published on COVID-19 were original research papers, the majority of the remaining literature consisting of documents such as editorials, letters or perspectives, while a small fraction (approximately 0.5–0.8%) were errata and/or retractions (Teixeira da Silva et al. 2020). According to Di Girolamo and Reynders (2020), a greater share of published COVID-19 papers are secondary article types (i.e., editorials, opinions, letters) than was the case during the H1N1 pandemic, and they recommend action to flatten the curve of such articles so as not to dilute important new knowledge about the disease.

The analysis by Teixeira da Silva et al. (2020) also revealed that not all COVID-19 papers and their data sets are OA, even though they should be: several publishers pledged to make all work related to COVID-19 OA, in response to a call by National Science and Technology Advisors from 12 countries on March 13, 2020,Footnote 6 signing the Wellcome consensus statement in a bid to fortify the open science and open data (OD) movements, and as a noble gesture of aiding humanity and sustaining the integrity of medical science. An analysis of COVID-19-related research that should be OA, but is not, in contravention of that agreement, is needed. Furthermore, researchers working on other important diseases might feel that their work also deserves OA status, similar to COVID-19-related research, as well as fair peer review and editorial handling.Footnote 7

Separately, and unrelated to the retracted Mehra et al. (2020a, b) papers, Zhuang et al. (2020) was withdrawn (i.e., retracted) due to public criticism, and the public file was deleted. Despite this, the paper and its abstract are still listed at ResearchGate,Footnote 8 but the paper is not marked as retracted, thus inviting academics to cite it; it has already accrued 28 citations according to GS. A paper by Funck-Brentano and Salem (2020) that cited Mehra et al. (2020b) was retracted and subsequently republished as Funck-Brentano et al. (2020), and while its ResearchGate page shows a “retracted” label, the PDF file is still that of the unretracted paper.Footnote 9 Incidentally, the Funck-Brentano and Salem (2020) paper, which was retracted a few days after publication, had been cited by the coordinator of the White House coronavirus task force to support a decision against importing COVID-19 tests.Footnote 10 It has accrued 29 citations according to GS, despite its retracted status. The use of, reliance on, or citation of COVID-19 literature that has been badly reviewed, non-reviewed (preprints), or retracted may pose a public health risk, because misinformation and incorrect perceptions may be perpetuated (Ioannidis 2020; Jacobsen and Vraga 2020). The risk is heightened in the health sector, where practitioners are pressed for time and under-resourced, and where advice can spread by word of mouth (Martin 2017).

Measures needed to minimize risks associated with COVID-19 literature and avoid retractions

COVID-19 continues to affect lives and lifestyles across the globe in deadly and tangible ways. Although rapid retractions and insightful retraction notices are a good start in correcting erroneous COVID-19 literature, greater structural reform, fortified peer review and stricter editorial handling are needed for full and thorough accountability.

Fortification of the publishing process moving forward is needed. Trust and transparency need to increase, and would benefit if the datasets, peer review reports, editorial comments and decisions, and authors’ responses for all COVID-19 papers were released publicly, i.e., an OD policy (Shuja et al. 2020).Footnote 11

Rigorous peer review is needed as a safeguard to reduce the risk of publishing research that provides no new knowledge, or that has flaws (Drummond 2016). Peers without biases and self-interest should be selected to review and accept research that adds new knowledge and builds on the existing literature, including replication studies (Csiszar 2016; Mavrogenis et al. 2020). Existing knowledge is assumed to be the status quo unless disputed by new evidence. Reviewers should take a conservative approach: before reviewing, they should tentatively assume that the manuscript does not contribute new knowledge (the null hypothesis), unless the evidence provided is sufficiently robust to disprove this assumption. In that case, the null hypothesis is rejected and the manuscript is published. However, there is still a chance that an unsound paper is published, since a type I error might occur (Ioannidis 2005): the null hypothesis is rejected although it is true, and the reviewer recommends acceptance of a flawed manuscript. Conversely, a type II error occurs when the null hypothesis is not rejected although the paper does contribute new knowledge; the reviewer then recommends rejection of a good (i.e., scientifically sound) manuscript. This happens especially in journals where the acceptance rate is very low (Björk 2019). Peer review is supposed to reduce type I errors, i.e., the chances of accepting manuscripts that are flawed; unfortunately, doing so increases the chances of a type II error, namely the likelihood that a good paper is not published. There is thus a trade-off between the likelihood of accepting an unsound manuscript and the likelihood of rejecting a good one. Minimizing the likelihood of a type I error will lead to very few papers being accepted for publication, but it also results in numerous good papers being rejected (Heckman and Moktan 2020).
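This hypothesis-testing framing of peer review can be summarized in standard notation (an illustrative sketch; the symbols $H_0$, $\alpha$ and $\beta$ are introduced here for clarity and do not appear in the cited works):

```latex
% H_0: the manuscript contributes no new knowledge (the status quo)
\begin{align*}
\alpha &= \Pr(\text{accept} \mid H_0 \text{ true})
  && \text{type I error: a flawed or empty paper is published}\\
\beta  &= \Pr(\text{reject} \mid H_0 \text{ false})
  && \text{type II error: a sound paper is rejected}
\end{align*}
```

Tightening review lowers $\alpha$ but, all else being equal, raises $\beta$; the two error rates cannot be driven to zero simultaneously.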
To reduce the risk of a type I error, Oller and Shaw (2020) advocated that research related to vaccine safety and analysis be more rigorous, unbiased and independent. At the Medical Journal of Australia (MJA), select COVID-19 papers of an urgent nature are offered an automatic preprint option, but the precise criteria for inclusion in, or exclusion from, this exclusive publishing model are not clearly defined, nor is it clear what advantage a journal-run preprint option offers that preprint servers such as bioRxiv or medRxiv do not already provide (Tally 2020).

When deciding on the rigor of the peer review process, consideration should be given to the cost of a type I error versus that of a type II error, as well as to adopting a precautionary principle. When the cost of a type I error is high, as it may well be for COVID-19 research on public health, peer review measures should be tightened (Ioannidis 2020). The measures we recommend below, covering the period from submission to publication of original research papers, are rigorous and would take at least a month or two, in order to minimize the risks of making such an error and publishing flawed research or highly speculative claims about public health (Bauchner 2017; Bauchner et al. 2020). The trade-off, and thus the sacrifice, is that some good-quality research papers on COVID-19 may be rejected. For editorials, letters, and perspectives, the editorial review should take no more than two weeks, i.e., desk rejections should be quick (Teixeira da Silva et al. 2018), and some such manuscripts may not require peer review, as they are mostly opinions containing normative statements that can be debated by scholars.
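The cost comparison above can be made explicit with a simple decision rule (an illustrative formulation, not drawn from the cited works; $C_{\mathrm{I}}$ and $C_{\mathrm{II}}$ denote the respective costs of the two error types):

```latex
% Tighten peer review when the expected harm of publishing a flawed paper
% exceeds the expected loss from rejecting a sound one:
C_{\mathrm{I}} \cdot \Pr(\text{type I error})
  \;>\;
C_{\mathrm{II}} \cdot \Pr(\text{type II error})
```

For public-health-critical COVID-19 research, $C_{\mathrm{I}}$ is plausibly very large (e.g., a halted WHO trial), which is the precautionary argument for stricter review despite the resulting rejection of some sound papers.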

We believe that the following six-step processFootnote 12 would minimize the risks of publishing questionable original research on public health related to COVID-19:

1. Prescreening of a paper by the editor-in-chief, with simultaneous advice from an expert editor and a statistician, as suggested by the MJA (Tally 2020), within a week. Where analyses are found to be robust, the paper can be preprinted on a journal’s short-list of preprint servers, if the authors so wish. An OD policy should be mandatory.

2. Independent peer review by at least three experts selected by the journal’s editors within three weeks, including full access to raw data. If authors fail to provide the raw data underlying their analyses when requested by reviewers, they should be subject to an ethics investigation.

3. Ideally, an open peer review policy should be in place, and it should be mandatory for all authors, rather than optional (and thus applicable to only some).

4. There should be no immediate, overnight acceptance of any paper on COVID-19 research, to avoid an association with predatory publishing practices. Authors would have 1–2 weeks to complete minor revisions and 3–4 weeks for major revisions, with another 1–2 weeks for each additional round of revision. In contrast, Eisen et al. (2020) advocate no time limit on revisions. Reviewers should examine revisions within a week and decide whether to reject, accept, or request additional changes.

5. The editorial decision should be based on reviewers’ recommendations, and acceptance should require the unanimous approval of all three reviewers. Doubt should be minimized in order to reduce the risk of a type I error. Unanimity would most likely keep retractions to no more than four per 10,000 articles, i.e., a retraction rate of 0.04% (Brainard and You 2018).

6. Processing a manuscript to publication should take, at most, another two weeks, and should include publication of the open peer review reports, authors’ responses, and editorial comments and decisions.

Conclusions

This paper argues that the volume of research being conducted may be placing unprecedented strain on publishers’ online submission systems, peer reviewers and editors, many of whom are struggling to deal with the sheer mass of submissions while simultaneously coping with the health, emotional and psychological pressures associated with the changes that this pandemic has imposed on society as a whole. With such large volumes of information available, the publishing system is somewhat overwhelmed and the selection of pertinent literature can be challenging (Brainard 2020). Handling a large volume of submissions in order to release potentially valuable information to the public, whether to combat the virus or to raise awareness, may result in a lapse in the quality of peer review or editorial oversight (Bauchner et al. 2020; Chirico et al. 2020; Palayew et al. 2020). Authors, peer reviewers, editors, publishers, and media that cover published COVID-19-related research, as well as the public, face considerable challenges in the months ahead. The US academic research enterprise has been massively impacted, including educational and research disruption (Radecki and Schonfeld 2020), and a similar scenario is likely playing out around the globe. As COVID-19-related literature continues to grow,Footnote 13,Footnote 14 the inability to effectively correct errors may fragilize academic publishing, inducing not only a health crisis, but a publishing crisis (Bell and Green 2020). Only time will tell whether the mistakes in papers caused by these publishing weaknesses, as well as the frailties in society and healthcare exposed by COVID-19 in the past few months, will lead to the emergence of a more robust, or a more fragilized, publishing landscape.Footnote 15