Proposing a manuscript peer-review checklist
Introduction
Two growing trends conspire to make peer review an increasingly demanding task. The first is the increase in the number of reviews. It has been reported that the number of scientific journals increases linearly year after year (Hook, 1999). Journal size also increases—both in the number of articles published and in the number of pages per article (Tenopir and King, 2000). While exact growth rates are subject to debate, the net consequence (assuming that higher acceptance rates are not the sole explanation) is that the number of submitted manuscripts has increased, and with it the number of reviews.
The second issue is the decrease in the review time requested by editors. The paradigm shift caused by the World Wide Web and the advent of electronic submission will not be discussed here, other than to state that months-long reviews are a thing of the past. For journals to remain competitive, authors and readers demand that articles be reviewed and published faster, in part so that information circulates faster. It is now routine for editors to ask that reviews be completed within a few weeks.
At the same time, publications reporting results from studies—clinical, methodological, or otherwise—are increasingly being referred to in the context of evidence-based medicine. Readers therefore expect, and rely on, published articles to be objective and of ever-higher quality. Editors and reviewers alike face a trade-off between decreasing review times and increasing article quality.
Both of these issues are well exemplified by the current changes in editorial policy for peer review at NeuroImage, where the net effect is a call for more reviews, completed in less time. We expect this trend to spread to most scientific journals; once generalized, it will further complicate the task of reviewers, who are called upon by multiple editors from different publications to deliver at ever-quicker turnaround rates.
Reviewing remains an experience-driven process, neither taught nor necessarily transmitted in the same fashion as field-specific knowledge (Benos et al., 2003). It is somewhat puzzling that the manuscript peer-review process, which is often hotly debated, has received so little attention or formalization in the literature.
Reviewing is an important but secondary task. To be successful in the face of increased demand, reviewers will need a net gain in productivity. Otherwise, either the quality of reviews will diminish, or, to maintain the same quality level, reviewing will become a primary task. Both of these situations are to be avoided.
We hypothesize that if the review process is formalized, review quality and reviewer productivity will increase. To address this situation, we suggest a tool to improve reviewers' productivity: the manuscript peer-review checklist. Conceived as a guide, it is not meant to be a categorical tool for arriving at a deterministic assessment of a manuscript's quality, but rather an aide-memoire to help reviewers in their task.
The goal of this article is to present this checklist. For the present purposes, we have chosen to limit ourselves to the domain of medical imaging, even though these issues are not confined to this field; it would be difficult at this point to synthesize a list of questions that could fit all scientific journals. It must be stated that this article is intended as a presentation of the checklist, not a report on its use. The research goal is to gauge the interest of the medical imaging community in the usefulness and appropriateness of the tool in the review process, before extending further to research on its efficiency.
Section snippets
Checklist elaboration
The elaboration of the checklist followed a three-part process: (a) compilation of checklist items, based on numerous sources; (b) structuring of checklist elements; and (c) assessment through the paradigm of Verification, Validation and Evaluation (VVE).
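The three-part process above can be sketched as a simple data model. This is a hypothetical illustration only—the class and field names (ChecklistItem, Checklist, by_section) are our assumptions, not the authors' implementation—but it shows how compiled items from multiple sources can be structured by manuscript section and flagged through the VVE assessment:

```python
# Hypothetical sketch of the checklist-elaboration process:
# (a) compile items from multiple sources, (b) structure them,
# (c) track VVE (Verification, Validation, Evaluation) status.
# All names here are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class ChecklistItem:
    text: str                 # the review question itself
    source: str               # e.g. "CONSORT", "STARD", "reviewer experience"
    section: str = "General"  # manuscript section the item applies to


@dataclass
class Checklist:
    items: list = field(default_factory=list)
    # VVE flags for the checklist as a whole
    verified: bool = False    # was it built as intended?
    validated: bool = False   # does it capture what it should?
    evaluated: bool = False   # is it useful in practice?

    def add(self, item: ChecklistItem) -> None:
        # (a) compilation: gather candidate items from existing sources
        self.items.append(item)

    def by_section(self) -> dict:
        # (b) structuring: group items by manuscript section
        grouped: dict = {}
        for item in self.items:
            grouped.setdefault(item.section, []).append(item)
        return grouped


cl = Checklist()
cl.add(ChecklistItem("Is the sample size justified?", "CONSORT", "Methods"))
cl.add(ChecklistItem("Are reference-standard details reported?", "STARD", "Methods"))
cl.add(ChecklistItem("Is the title informative and accurate?", "reviewer experience", "Title"))
print(len(cl.items))           # 3
print(sorted(cl.by_section())) # ['Methods', 'Title']
```

The grouping step mirrors the structuring phase: a reviewer works through the manuscript section by section rather than through one flat list of criteria.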
Checklist compilation
The checklist has been compiled using (a) existing checklists, such as the Consolidated Standards of Reporting Trials (CONSORT) (Moher et al., 2001) (22 criteria), the Standards for Reporting of Diagnostic Accuracy (STARD) (Bossuyt et al.,
Discussion
We have proposed a 71-criterion checklist to be used in the peer review of manuscripts submitted to medical imaging journals. Our primary goal is a qualitative increase in reviewer productivity and in the quality of the resulting reviews.
The checklist should be viewed as a series of guidelines—an aide-memoire for reviewers. It is not meant as a means of standardizing output: reviews still need to be personalized and tailored to the individual manuscript.
When generating the checklist, we have
Authors' contributions:
- Guarantors of integrity of entire study, all authors;
- Study concepts and design, all authors;
- Literature research, S.D.;
- Methods, analysis and interpretation, all authors;
- Manuscript preparation, S.D.; revision/review, all authors; and
- Manuscript definition of intellectual content, editing, and final version approval: all authors.
There were no medical writers involved in the creation of this manuscript.
Acknowledgments
We thank Dr. K. Friston, editor of NeuroImage, for his input on the final version of this manuscript. S.D. and P.J. acknowledge the support of the Fond pour la Recherche en Santé du Québec and the Institut National de Santé et Recherche en Médecine.
Role of the funding sources: The funding sources had no involvement in the study design; the collection, analysis, and interpretation of data; the writing of the report; or the decision to submit the paper for publication.
Disclosures: None.
References (12)
- Moher et al. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of Reporting of Meta-analyses. Lancet (1999)
- Moher et al. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet (2001)
- Verification, validation and certification of modeling and simulation applications
- Benos et al. How to review a paper. Adv. Physiol. Educ. (2003)
- Bossuyt et al. Standards for reporting of diagnostic accuracy. Toward complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. Standards for Reporting of Diagnostic Accuracy. Radiology (2003)
- Hook. Scientific communications: history, electronic journals and impact factors. Scand. J. Rehabil. Med. (1999)