Published April 26, 2022 | Version v1
Peer review | Open

Review of A Rubric to Evaluate Preprint Peer Reviews

Description

This document provides a rubric for quantitative evaluation of the quality of a peer review report on a preprint article, and was developed as part of an undergraduate class on peer review. It is a significant addition to the sparse literature on peer review assessment, and also contributes to current discussions about the undertaking of peer review of preprints. It is a useful document for thinking about evaluating peer review quality in all contexts, and also for designing peer review reports in a preprint context, where there is often a lack of pre-defined structure (in comparison to some journal peer review formats). Overall, I am looking forward to using it myself for both writing and evaluating reviews, and am certain it will be an important contribution to the literature on preprint peer review.

Major comments:

It may be important to make explicitly clear to the reader that this rubric is intended for evaluating (or writing) a comprehensive peer review report covering all aspects of a preprint, in contrast to reviews that comment only on a particular domain or section. This is probably most important for those using the rubric for evaluation, especially in a training context, because instructors providing such activities will need to make clear to students how comprehensive the review report should be. It may be that this rubric could be made more modular, e.g. evaluators may wish to assess specific components, such as structure, in isolation; some indication of how that could be done (presumably by removing or adjusting the weighting scores?) would be helpful, as in the illustration below. Related to the weighting, it may also be helpful to describe the rationale or process behind the scale and the weighting percentages, or perhaps to offer evaluators some suggestions on how appropriate (or not) it may be to adjust the weightings based on their own interests or motivations. I would also suggest moving the instructions earlier in the document, as the scale in particular is not necessarily intuitive, and varied use of the rubric could cause problems where multiple evaluators assess the same work or group of students.
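To illustrate what I mean by adjusting the weightings (using hypothetical numbers here, not the rubric's actual percentages): if three components were weighted 50%, 30%, and 20%, and an evaluator chose to drop the 20% component, the remaining weights could be renormalised to 50/80 = 62.5% and 30/80 = 37.5%, so that the overall score remains a weighted average of the components that were actually assessed. Some guidance of this kind would help keep scores comparable across evaluators who use different subsets of the rubric.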

It may also make sense to articulate whether this rubric is tied to particular disciplines or written from a particular perspective (e.g. the biological sciences); whether it is geared towards experimental preprints rather than, for example, white papers; and whether it is written with a particular Western framing of scholarly writing in mind. That said, the sections of the rubric (the methods section, for example) appear widely applicable to many fields, including to white papers or commentary pieces that undertake some methodology or data collection as part of the described work.

Minor comments:

Under REVIEW STYLE, one bullet suggests "Review includes a balance of positive and negative feedback". The word "balance" implies an equal amount of positive and negative feedback, which may not always be possible. Is the intention to ensure that reviewers point out both strengths and weaknesses, where possible? If so, wording such as "Review is written with a view to discussing both strengths and weaknesses" may be more suitable. On my reading, the current wording suggested that I should be trying to find a flaw for every strength, and vice versa, when writing a review.

Other suggestions for revisions:

Given that the purpose of this document is to present the rubric, these comments are merely advisory: it could be interesting to see a bit more about how the rubric was designed, with a description of the process by which the authors arrived at these categories and ideas.

It may also be useful to expand the context a little by discussing how the rubric relates to, or addresses, the recently published FAST principles for preprint peer review (Iborra et al., 2022. FAST principles for preprint feedback. https://doi.org/10.31219/osf.io/9wdcq), and how it relates to, or differs from, the PREreview assessment rubric (Foster et al., 2021. Open Reviewers Review Assessment Rubric. https://doi.org/10.5281/zenodo.5484072). However, this may be more detail than the authors feel is necessary.

A minor formatting issue: in the "Review Substance" sections, it may be useful to add sub-headers indicating that each section covers a different component. Alternatively, to keep formatting consistent with the bolded and capitalized headers in the other sections, the headers could be: "REVIEW SUBSTANCE: INTRODUCTION"; "REVIEW SUBSTANCE: METHODS & RESULTS"; "REVIEW SUBSTANCE: DISCUSSION".

Another minor formatting issue, from the point of view of someone wishing to use this for evaluation: it would be helpful to create a version with the table on a single sheet (or perhaps to add a supplementary file that can be downloaded and printed), so that evaluators can refer to the whole rubric while evaluating a review. It may also be worth creating a multi-page version in which each section includes a space for the evaluator to write comments or notes on that particular section, whether the document is printed out or completed as an editable PDF. This would help evaluators weigh up how to determine scores after reading through the entire peer review report, with relevant notes to look over at the end.

Review Methodology and COI disclosure

This review was carried out following the structure of the rubric, as well as the FAST principles.

Public COI disclosure: I am a consultant who engages in activity related to peer review. I am supported financially by a grant to evaluate the curriculum in which this product was developed, and I will make use of this product in that context. However, I did not have input into the design of this product.


Additional details

Related works

Reviews (Lesson): 10.5281/zenodo.6471332 (DOI)