Research Methods & Reporting

PRISMA harms checklist: improving harms reporting in systematic reviews

BMJ 2016; 352 doi: https://doi.org/10.1136/bmj.i157 (Published 01 February 2016) Cite this as: BMJ 2016;352:i157

This article has a correction.

  1. Liliane Zorzela, clinical assistant professor1,
  2. Yoon K Loke, professor2,
  3. John P Ioannidis, professor3,
  4. Su Golder, research fellow4,
  5. Pasqualina Santaguida, assistant professor5,
  6. Douglas G Altman, professor6,
  7. David Moher, senior scientist7,
  8. Sunita Vohra, professor1,
  9. PRISMA harms group
  1. Department of Pediatrics, Faculty of Medicine and Dentistry, and Integrative Health Institute, University of Alberta, Edmonton, T6G 2C8, AB, Canada
  2. Norwich Medical School, Norwich, UK
  3. Stanford University School of Medicine, Stanford University School of Humanities and Sciences, Stanford, CA, USA
  4. Department of Health Sciences, University of York, Heslington, UK
  5. Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, ON, Canada
  6. Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology, and Musculoskeletal Sciences, Botnar Research Centre, University of Oxford, Oxford, UK
  7. Ottawa Methods Centre, Ottawa Hospital Research Institute, Ottawa Hospital, Ottawa, ON, Canada
  Correspondence to: S Vohra svohra@ualberta.ca
  • Accepted 11 December 2015

Abstract

Introduction For any health intervention, accurate knowledge of both benefits and harms is needed. Systematic reviews often compound poor reporting of harms in primary studies by failing to report harms or doing so inadequately. While the PRISMA statement (Preferred Reporting Items for Systematic reviews and Meta-Analyses) helps systematic review authors ensure complete and transparent reporting, it is focused mainly on efficacy. Thus, a PRISMA harms checklist has been developed to improve harms reporting in systematic reviews, promoting a more balanced assessment of benefits and harms.

Methods A development strategy, endorsed by the EQUATOR Network and used for existing reporting guidelines (including the PRISMA statement, PRISMA for abstracts, and PRISMA for protocols), was used. After the development of a draft checklist of items, a modified Delphi process was initiated. The Delphi consisted of three rounds of electronic feedback followed by an in-person meeting.

Results The PRISMA harms checklist contains four essential reporting elements to be added to the original PRISMA statement to improve harms reporting in reviews. These relate to the title (“Specifically mention ‘harms’ or other related terms, or the harm of interest in the review”), the synthesis of results in the methods section (“Specify how zero events were handled, if relevant”), the study characteristics (“Define each harm addressed, how it was ascertained (eg, patient report, active search), and over what time period”), and the synthesis of results in the results section (“Describe any assessment of possible causality”). Additional guidance regarding existing PRISMA items was developed to demonstrate relevance when synthesising information about harms.

Conclusion The PRISMA harms checklist identifies a minimal set of items to be reported when reviewing adverse events. This guideline extension is intended to improve harms reporting in systematic reviews, whether harms are a primary or secondary outcome.

Introduction

Evidence based healthcare practice seeks the best, unbiased evidence to make appropriate decisions to improve patient outcomes. For optimal healthcare decision making, accurate knowledge of both benefits and harms is needed.

Even well designed, conducted, and reported randomised controlled trials can provide an incomplete and potentially biased assessment of an intervention when efficacy results are emphasised and harms are inadequately reported.1 2 3 4 5 6 7 8 9 10 A misconception may be perpetuated that a given intervention is safe, when its safety (that is, absence of harm) is actually uncertain.4 5 6 7 8 9 10

Systematic reviews often compound poor reporting of harms in primary studies by failing to report harms or doing so inadequately.11 12 13 14 15 16 17 18 19 20 There are guidelines for reporting systematic reviews,1 21 22 but they focus on how best to report treatment benefits. Systematic reviews can be misleading if they do not properly represent the true risk-to-benefit balance of a given treatment.4 8 14 15 16 17 Because many adverse events are rare and are not typically the primary outcome of included studies, the search strategy, eligibility of study designs, and statistical methods might need to differ from those of systematic reviews that only address efficacy.

Adverse events are the primary outcome assessed in less than 10% of systematic reviews.11 12 13 14 In 1994, only five reviews retrieved from the Database of Abstracts of Reviews of Effects (DARE) and the Cochrane Database of Systematic Reviews (CDSR) were specifically designed to analyse an unintended effect of an intervention. While the absolute number has increased over time, the proportion has remained stable. In 2010, 104 reviews retrieved from CDSR and DARE evaluated adverse events exclusively, but the proportion of reviews of harms in comparison to efficacy reviews remained stable at 5% between 1994 and 2010.13

A systematic review12 of systematic reviews published between 2008 and 2011 retrieved from CDSR and DARE identified 296 DARE reviews and only 13 Cochrane reviews with a singular primary intent to measure adverse events of interventions. Even though systematic reviews increasingly try to consider all outcomes (both beneficial and harmful), data on adverse events may be more fragmented and incomplete, and given more cursory treatment than efficacy data. Even when reviews are exclusively designed to measure adverse events, identified reporting deficiencies have included: lack of a clear definition of the adverse event reviewed, lack of specification regarding study designs selected for inclusion, and no report on length of participants’ follow-up or measurement of any associated patient risk factors that could lead to the adverse event.12 We set out to extend the PRISMA guideline (Preferred Reporting Items for Systematic reviews and Meta-Analyses)1 to facilitate balanced reporting of benefits and harms.

Development of PRISMA harms

We followed the strategy developed by the EQUATOR Network22 and used for previous reporting guidelines including the PRISMA statement,1 PRISMA for abstracts,23 and PRISMA for protocols.24 To document the need for a reporting guideline regarding adverse event reporting in systematic reviews, our team evaluated the reporting characteristics of reviews with adverse events as a primary outcome and found several areas (title, abstract, methods, results, and conclusion) that could benefit from more transparent reporting.12

A list of 37 potential new items for PRISMA harms was developed on the basis of preliminary findings of previous systematic reviews.12 25 These potential items were then compared against the original PRISMA statement to assess overlap and refine wording. The wording and content were further refined by the PRISMA harms steering committee.

After the development of a draft list of potential items, a modified Delphi26 process was initiated to obtain feedback from a broad spectrum of stakeholders. We surveyed 324 people through an online survey consisting of three rounds of participant feedback. Delphi participants were selected on the basis of their expertise in systematic reviews in general, and in particular reviews of adverse events. This selection was complemented by other content experts including methodologists, statisticians, epidemiologists, clinicians, journal editors, a consumer, and a member of a federal health regulatory agency (Health Canada). The list of potential items was sent to the participants at four week intervals. A total of 112 participants contributed to at least one of the three Delphi rounds; 56 participants completed more than one round. The Delphi results led to the exclusion of one potential item; eight items received scattered votes, and the remaining 28 items were voted relevant by the Delphi participants.

An in-person two day consensus meeting was held in Banff, Canada, in May 2012. It included 25 experts from seven countries with extensive experience in systematic reviews, adverse events research, and guideline development, as well as a consumer and a member of a health regulatory agency (Health Canada). Meeting participants had the results of the review identifying the current state of reporting in systematic reviews of harms12 and the results of the Delphi process to inform their discussion of relevant items to be included in this guideline extension.

Scope of PRISMA harms

A goal of the in-person consensus meeting was to define the PRISMA harms guideline’s applicability. After discussion, it was agreed that the modifications to the existing PRISMA statement should aim to improve the reporting of adverse events (defined in table 1 27 28 29) in systematic reviews, whether adverse events are a primary or secondary outcome.

Table 1

Glossary of terms


Systematic reviews assessing harms may include observational studies exclusively or in addition to interventional studies.30 PRISMA harms items and recommendations for reporting harms in systematic reviews are applicable to observational studies (with and without a comparison group) and to prospective interventional studies, if such studies are included in the review.

This report is an extension of the PRISMA statement1 and, as such, should be used in addition to the original statement. The main goal of the four PRISMA harms items is to bring attention to a minimal set of items to be reported in any review assessing harms. The recommendations for reporting harms in systematic reviews provide elaboration and examples of how existing PRISMA items should be applied to improve reporting of harms in systematic reviews.

How to use the PRISMA harms checklist

The PRISMA harms checklist (table 2) contains four extension items that must be used in any systematic review addressing harms, irrespective of whether harms are analysed alone or in association with benefits. These are:

Table 2

PRISMA harms checklist

  • Item 1—title: specifically mention “harms” or other related terms, or the harm of interest in the systematic review.

  • Item 14—synthesis of results: specify how zero events were handled, if relevant.

  • Item 18—study characteristics: define each harm addressed, how it was ascertained (eg, patient report, active search), and over what time period.

  • Item 21—synthesis of results: describe any assessment of possible causality.

These items are added to the original PRISMA statement, such that a systematic review addressing adverse events should report the PRISMA statement items and the PRISMA harms.

In this article, we provide explanation and guidance for the four PRISMA harms items, a minimum set of items to be reported in any review assessing harms. We also discuss recommendations for reporting harms in systematic reviews for those items in the original PRISMA checklist that require special consideration when reporting on harms. The recommendations are considered a desirable set, and relevant to harms publications in general. We include relevant examples of good reporting for all PRISMA harms items and recommendations for reporting harms in systematic reviews.

Item 1—title

PRISMA statement: “Identify the report as a systematic review, meta-analysis, or both.”

PRISMA harms extension: “Specifically mention ‘harms’ or other related terms, or the adverse event(s) of interest in the review.”

Example

“Perinatal mortality and other severe adverse pregnancy outcomes associated with treatment of cervical intraepithelial neoplasia: meta-analysis.” 31

PRISMA harms item explanation

The title should clearly reflect the objectives of the systematic review, be accessible to all readers, and provide a one line summary of the authors’ intention. If adverse events are part of the review’s objectives, whether as a primary or secondary outcome, the title should state this clearly. It can name the specific adverse event under review or use a generic harms related term (for example, risk, complication, adverse effect, or adverse reaction). If the harm is a coprimary outcome (for example, when both efficacy and harms are measured), the title should indicate this.

Item 2—abstract

PRISMA statement: “Provide a structured summary including, as applicable: background; objectives; data sources; study eligibility criteria, participants, and interventions; study appraisal and synthesis methods; results; limitations; conclusions and implications of key findings; systematic review registration number.”

Example

“Background: Bisphosphonates are widely used in osteoporosis, but there have been concerns about a potential link between bisphosphonate therapy and atrial fibrillation. Objective: We aimed to systematically evaluate the risk of atrial fibrillation associated with bisphosphonate use. Methods: We searched MEDLINE, regulatory authority websites, pharmaceutical company trial registers and product information sheets for randomized controlled trials (RCTs) and controlled observational studies published in English through to May 2008. We selected RCTs of bisphosphonates versus placebo for osteoporosis or fractures, with at least 3 months of follow-up, and data on atrial fibrillation. For the observational studies, we included case control or cohort studies that evaluated the risk of atrial fibrillation in patients exposed to bisphosphonates compared with non-exposure. Data on atrial fibrillation as the primary outcome, and stroke and cardiovascular mortality as secondary outcomes, were extracted. Data Synthesis/Results: We calculated pooled odds ratio (OR) using random effects meta-analysis, and estimated statistical heterogeneity with the I2 statistic. Bisphosphonate exposure was significantly associated with risk of atrial fibrillation serious adverse events in a meta-analysis of four trial datasets (OR 1.47; 95% CI 1.01, 2.14; p=0.04; I2=46%). However, meta-analysis of all atrial fibrillation events (serious and non-serious) from the same datasets yielded a pooled OR of 1.14 (95% CI 0.96, 1.36; p=0.15; I2=0%). We identified two case-control studies, one of which found an association between bisphosphonate exposure (ever users) and atrial fibrillation (adjusted OR 1.86; 95% CI 1.09, 3.15) while the other showed no association (adjusted OR 0.99; 95% CI 0.90, 1.10). Both studies failed to demonstrate a significant association in ‘current’ users. We did not find a significant increase in the risk of stroke (three trial datasets; OR 1.00; 95% CI 0.82, 1.22; p=0.99; I2=0%) or cardiovascular mortality (three trial datasets; OR 0.86; 95% CI 0.66, 1.13; p=0.28; I2=31%). Conclusion: While there are some data linking bisphosphonates to serious atrial fibrillation, heterogeneity of the existing evidence, as well as paucity of information on some of the agents, precludes any definitive conclusions on the exact nature of the risk.”32

Recommendations for reporting harms in systematic reviews

As abstracts reach a broad audience, the abstract of a systematic review should be a clear summary of the review.33 Abstracts should report any analysis of harms undertaken in the review, if harms are a primary or secondary outcome, as recommended by PRISMA for abstracts.23

Item 3—introduction

PRISMA statement: “Describe the rationale for the review in the context of what is already known.”

Example

“Epilepsy has a prevalence of 5-10 persons/1000. During pregnancy, women with epilepsy cannot generally safely discontinue their antiepileptic therapy, and the risks to the unborn child from maternal antiepileptic medication need to be balanced against the risk of uncontrolled epilepsy both to the mother and the baby. In the last decade, an increasing number of studies have addressed the long-term safety of these drugs on child development, with conflicting results. Synthesizing these data into an overall risk assessment is critical for clinical counselling of women with epilepsy and their families. The objective of this study was to perform a systematic review of the literature pertaining to long-term neurodevelopment after in utero exposure to antiepileptic drugs (AEDs) and to conduct a meta-analysis to allow overall risk estimation.”34

Recommendations for reporting harms in systematic reviews

The introduction should inform the reader of the review’s overall goal and provide the rationale for the approach taken.1 Systematic reviews of harms can be designed with a narrow focus, evaluating a specific type of adverse event, or with a broad focus to evaluate all adverse events associated with a given intervention.28 The systematic review should clearly describe in the introduction or methods section which events are considered harms and provide a clear rationale for the specific harm(s), condition(s), and patient group(s) included in the review.

Item 4—objective

PRISMA statement: “Provide an explicit statement of questions being addressed with reference to participants, interventions, comparisons, outcomes, and study design (PICOS).”

Example

“Our objective was to systematically determine the comparative effects of rosiglitazone and pioglitazone on cardiovascular outcomes (myocardial infarction and congestive heart failure) and mortality from observational studies in patients with type 2 diabetes.”35

Recommendations for reporting harms in systematic reviews

The objective of a systematic review should be clearly stated, preferably at the end of the introduction.1 The PRISMA statement1 suggests the PICOS format (population, intervention, comparison, outcomes, and study design). Overall, the PICOS format should be specified, although in systematic reviews of harms the selection criteria for population, comparison, and outcomes can be broad. For example, the same intervention might have been used for heterogeneous indications in a diverse range of patients, such that the systematic review allows a broad range of comparisons to be included. Similarly, if a review is attempting to evaluate any or all possible harms (including new or unexpected events) associated with a given intervention, the potential outcomes (that is, adverse events) cannot be completely defined a priori (in the protocol phase) in detail.

Item 5—protocol and registration

PRISMA statement: “Indicate if a review protocol exists, if and where it can be accessed (eg, web address), and, if available, provide registration information including registration number.”

No specific additional information is required for systematic reviews of harms.

Item 6—eligibility criteria

PRISMA statement: “Specify study characteristics (eg, PICOS, length of follow-up) and report characteristics (eg, years considered, language, publication status) used as criteria for eligibility, giving rationale.”

Example

“We included studies with data on severe obstetric or neonatal outcomes in women treated for cervical intraepithelial neoplasia and in a control group of untreated women. Two types of treatment were considered: excisional procedures (cold knife conisation, large loop excision of the transformation zone, and laser conisation) and ablative procedures (laser ablation, cryotherapy, and diathermy).

Outcome measures: The severe adverse obstetric or neonatal events were perinatal mortality, severe (at less than 32/34 weeks’ gestation) and extreme (<28/30 weeks) preterm delivery, and severe low birth weight (<2000 g, <1500 g, and <1000 g).

. . . There was no language restriction. Three authors . . . verified inclusion and exclusion criteria independently and reached consensus in case of discordance.

. . . We contacted authors to obtain data on outcomes by particular treatment procedure if they were not provided in the original reports. In addition, we collected data on the study design and matching criteria applied for the selection of a control group of non-treated women.”31

Recommendations for reporting harms in systematic reviews

Population and patient characteristics are important when considering harms and should be clearly reported. The review authors should report how they handled relevant studies when the outcomes of interest were not reported (that is, the primary study did not mention adverse events, but it was a relevant study based on the population, intervention and comparator). Explicit systematic review methods will provide readers with important information on whether the review might be affected by missing outcome data or missing studies.

Also relevant is transparent reporting of the review authors’ choices regarding study designs (for example, limiting the review to randomised controlled trials v including other study designs), particularly if certain designs were chosen specifically to address harms. Depending on the characteristics of the adverse event under investigation, different study designs have different strengths and weaknesses, and accordingly, the study designs considered appropriate for the particular outcome of interest should be reported.16

Whatever methods review authors choose to determine which studies are included should be made explicit, allowing readers to better understand how adverse events were identified and extracted from the included studies.

Item 7—information sources

PRISMA statement: “Describe all information sources (eg, databases with dates of coverage, contact with study authors to identify additional studies) in the search and date last searched.”

Example

“Systematic literature searches were conducted in the following electronic databases, all from their respective inception until February 2008 and without any language restrictions: PubMed via Medline, AMED, EMBASE, CINAHL and the Cochrane Library - the Cochrane Central Register of Controlled Trials (CENTRAL). The search terms were the common names(s), scientific names(s) and synonyms for S. repens . . . No limits were placed on the search function. Further relevant data were retrieved by hand searching the reference lists of identified papers and searching our files at the Complementary Medicine department, Peninsula Medical School, Universities of Exeter and Plymouth, Exeter, UK.

Additional data were requested from the following reporting schemes: Adverse Drug Reactions Advisory Committee (ADRAC), Australia; Bundesinstitut fur Arzneimittel und Medizinprodukte (BfArM), Germany; US Food and Drug Administration (FDA); and the Medicine and Healthcare products Regulatory Agency (MHRA), UK. The WHO Collaborating Centre for International Drug Monitoring, Uppsala, Sweden (WHO-UMC) was also requested to provide the total numbers of adverse events reports received up until September 2007 involving the use of S. repens. Twenty-four manufacturers/distributors of S. repens preparations were identified from a review, standard text and from Internet searches. They were contacted and asked for adverse event reports and any other safety information held on file. Four herbalist organizations (British Herbal Medicine Association, UK; European Herbal & Traditional Medicine Practitioners Association, UK; European Scientific Cooperative on Phytotherapy, UK; National Institute of Medical Herbalists, UK) were also contacted for relevant information.”36

Recommendations for reporting harms in systematic reviews

Review authors should be explicit if they only searched for published data, or also sought data from unpublished sources, authors, drug manufacturers, and regulatory agencies. Published data can differ substantially from unpublished data for various reasons, especially in relation to harms.37 38 39 40 41 If a systematic review includes unpublished data, a clear description should be provided of the source and the process of obtaining it.

Item 8—search

PRISMA statement: “Present full electronic search strategy for at least one database, including any limits used, such that it could be repeated.”

Example

“We searched PubMed, Embase, and CENTRAL using the terms: ‘angiotensin receptor blockers’, ‘angiotensin receptor antagonists’, ‘ARBs’, and the names of individual angiotensin receptor blockers in humans until August 2010. Appendix 1 on bmj.com gives details of the search and the MeSH terminologies used. We checked the reference lists of review articles, meta-analyses, and original studies identified by the electronic searches to find other eligible trials. There was no language restriction for the search. Authors of trials were contacted when results were unclear or when relevant data were not reported. In addition, we searched Food and Drug Administration (FDA) dockets by hand searching all documents submitted for drug approval/labelling change as well as the minutes from FDA meetings available on the FDA website.”42

Recommendations for reporting harms in systematic reviews

If additional searches were used specifically to identify adverse events, authors should present the full search process so it can be replicated. Reviews of harms might have different search methods and study selection criteria from reviews of efficacy. Reviews of efficacy base their search strategy and screening on the efficacy question (does the intervention provide the intended effect, for example, curing the disease?) and often cannot deal adequately with harms (that is, the unintended effects associated with the intervention). Reviews of harms might need distinct database filters and data source searches.16 30 43 44 45 46

Item 9—study selection

PRISMA statement: “State the process for selecting studies (ie, screening, eligibility, included in systematic review, and, if applicable, included in the meta-analysis).”

Example

“Inspection of citations: After duplicate citations were removed, all titles and abstracts were independently reviewed by two reviewers . . . with reference to the inclusion/exclusion criteria (Supplementary Table S3), and a decision was made whether to retrieve the full report of the study. The number of titles/abstracts identified, accepted, and rejected was recorded.

Inspection of retrieved reports: Once the full reports were retrieved, they were inspected for relevance to the review and the inclusion and exclusion criteria applied. Studies not meeting the predetermined criteria were excluded. If there was any disagreement about whether to include any of the studies, a third reviewer ( . . . ) assessed them and, together with the other reviewers, made a consensus decision about whether to include or exclude. A record was made of the number of papers retrieved and the number of papers excluded.”47

Recommendations for reporting harms in systematic reviews

Review authors should specify whether only studies reporting on adverse events were included in the review, as noted previously in item 6. If the systematic review only includes studies reporting on adverse events of interest, it should be clearly defined whether screening was based on adverse event reporting in the title or abstract or in the full text. Harms are especially poorly reported in titles and abstracts, leading to the potential exclusion of relevant studies.27 Moreover, studies might not report harms in the full text even though such data were collected; thus, additional relevant data may be obtainable only on request.

Item 10—data collection process

PRISMA statement: “Describe method of data extraction from reports (eg, piloted forms, independently, in duplicate) and any processes for obtaining and confirming data from investigators.”

No specific additional information is required for systematic reviews of harms.

Item 11—data items

PRISMA statement: “List and define all variables for which data were sought (eg, PICOS, funding sources) and any assumptions and simplifications made.”

Example

“Anti-TNF groups were divided into drug and dose categories. Dose was defined according to the recommended maintenance dose from the product labelling. Only recommended and high doses were considered in the analysis. The number of subjects experiencing death or at least one serious adverse event or serious infection was extracted for each treatment group. Extracting malignancy data from published clinical trial manuscripts requires caution as there is considerable variation in reporting, especially in the reporting of carcinoma in situ and non melanoma skin cancers. As manuscripts may aggregate malignancies differently, malignancies were allocated to three classes allowing for comparisons of similar outcomes: lymphomas; non-melanoma skin cancers and the composite endpoint of non-cutaneous cancers and melanomas. If a subject presented with two types of cancer, the cancers were allocated as a single event in the following order of priority: lymphoma, non-cutaneous cancer/melanoma, non-melanoma skin cancer. When the number of events instead of the number of subjects experiencing an event was reported, an assumption of one event per subject was made. All data were abstracted as reported in the publications. If an event described in a publication could not be allocated to a particular time or treatment group other sources of information were used. All data were compiled by two authors (TRE and JPL) and disagreements were resolved by consensus.”48

Recommendations for reporting harms in systematic reviews

Categorisation—Harms may be categorised in a heterogeneous fashion by primary study authors. For example, when investigating haemorrhagic stroke, some primary studies might report a combination of events under “neurological events,” others might report them under “cardiovascular events,” and a few might report them as “stroke” without subdividing further (that is, into haemorrhagic or ischaemic). Also, many harms can have different severities, and the definitions of seriousness used can vary between studies. These issues should preferably be considered at the protocol phase. All operational definitions used to classify adverse events under review should be explicitly identified by review authors.

Events/participants—Studies usually report on the number of events but these might not accurately reflect the number of participants experiencing the event. For example, the same patient might have angina, followed by myocardial infarction, and finally cardiovascular related death. Studies might report these three events as isolated findings (angina, myocardial infarction, death), but they all occurred in one participant during the study. Participants might also experience the same event multiple times. Review authors should report if multiple events occurred in the same individuals, if this information is available.

Factors associated with the event—When reporting on adverse events, consider whether the incidence of adverse events is related to factors associated with participants (such as age, sex, use of drug treatments) or provider (such as years of practice, level of training). Review authors should specify whether such information was extracted and how it was used in subsequent results.

Measurement—Different methods used to measure harms could lead to different results. Active methods (actively seeking information about harms) are associated with more reported events than passive methods (waiting for patients to report them).29 The timing and frequency of adverse events measurement is also important. For example, measurement might be done only at the end of the study intervention (when participants might not accurately recall how they felt during the entire course of the study) or done at regular intervals throughout the treatment period.4 9 Review authors should specify if they extracted details regarding the specific methods used to capture harms.

Reporting—Poor or unclear reporting in primary studies should be anticipated, and the approach used to overcome it should be included in the systematic review protocol.

Item 12—risk of bias in individual studies

PRISMA statement: “Describe methods used for assessing risk of bias of individual studies (including specification of whether this was done at the study or outcome level), and how this information is to be used in any data synthesis.”

Example

“Quality assessment: For RCTs that compared dexamethasone with another intervention and reported haemorrhage rate or for which haemorrhage rate data were obtained from the author(s), the methodological quality was assessed using the Cochrane Collaboration’s domain-based evaluation tool for assessing risk of bias. The methodological quality of haemorrhage rate recording and reporting was assessed for both randomized and NRS using selected elements of the McMaster Quality Assessment Scale of Harms for primary studies (the McHarm Scale), http://hiru.mcmaster.ca/epc/mcharm.pdf. The elements used were selected based on an evaluation of their relevance to our research question and they aimed to evaluate: the quality and appropriateness of study design and reporting, the applicability of the study findings to the population, and measures taken to reduce bias.”47

Recommendations for reporting harms in systematic reviews

Studies that are well designed to assess efficacy of an intervention may not necessarily preserve the same qualities when assessing harms.26 29 The risk of bias assessment should be considered separately for outcomes of benefit and harms.

Item 13—summary measures

PRISMA statement: “State the principal summary measures (eg, risk ratio, difference in means).”

No specific additional information is required for systematic reviews of harms.

Item 14—synthesis of results

PRISMA statement: “Describe the methods of handling data and combining results of studies, if done, including measures of consistency (eg, I2) for each meta-analysis.”

PRISMA harms extension: “Specify how zero events were handled, if relevant.”

Example

“In cases when only one study reported an adverse event, relative risks (RR) were calculated with the corresponding 95% confidence intervals (95% CI) using the data extracted. When in one cell there were zero cases, 0.5 was added to all four cells of a 2×2 table.”49

PRISMA harms item explanation

Because harms are often rare events, it is common to find studies reporting no instances of the specific harm. This situation requires systematic reviewers to consider relevant statistical issues, ideally a priori and documented in the systematic review protocol. The review authors should clearly report the steps taken in meta-analysis to overcome problems associated with studies with zero events in one or more groups.50 51 52 53 Harms are typically not the primary outcome of studies evaluating efficacy. Review authors should plan and specify how they will deal with studies not reporting on harms of interest, studies reporting a general statement indicating the absence of the event (for example, “no serious harms were identified in any group,” but with no definitions of seriousness), or studies not reporting on any adverse events. The review authors should clarify if there was an absence of events or an absence of reporting. They should specify if the situation of “no events reported” was treated as “zero events” or handled in some other manner. Additionally, the review authors should report whether any effort was made to clarify ambiguity with the authors of the primary studies.
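
To illustrate the continuity correction described in the example above, the following sketch (with hypothetical counts, not data from any cited review) computes a relative risk and 95% confidence interval after adding 0.5 to all four cells of a 2×2 table containing a zero cell. Other approaches exist (for example, excluding zero event studies or using exact methods), so whichever is chosen should be stated in the review methods.

```python
# Minimal sketch: relative risk with a 0.5 continuity correction added to all
# four cells of a 2x2 table, as in the example above. Counts are illustrative.
import math

def rr_with_correction(events_a, total_a, events_b, total_b, correction=0.5):
    """Relative risk and 95% CI after adding `correction` to every cell."""
    a = events_a + correction              # events, group A
    b = (total_a - events_a) + correction  # non-events, group A
    c = events_b + correction              # events, group B
    d = (total_b - events_b) + correction  # non-events, group B
    rr = (a / (a + b)) / (c / (c + d))
    se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
    upper = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, (lower, upper)

# Hypothetical: 0 events out of 120 in the treatment arm, 3 out of 118 controls
print(rr_with_correction(0, 120, 3, 118))
```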

Item 15—risk of bias across studies

PRISMA statement: “Specify any assessment of risk of bias that may affect the cumulative evidence (eg, publication bias, selective reporting within studies).”

Example

“Munshi et al. did not report the non-statistically significant adjusted odds ratio for the risk of hypoglycaemia in those with cognitive impairment. The incomplete reporting of null results means that our meta-analyses could potentially have overestimated the strength of the association between hypoglycaemia and cognitive impairment.”54

Recommendations for reporting harms in systematic reviews

There is mounting evidence that reporting bias in relation to harms is more frequent than reporting bias in relation to efficacy outcomes.55 56 57 An analysis of systematic reviews from the Cochrane Collaboration found that the single primary harm outcome was inadequately reported in 76% of the included studies, and it was not reported in 47% of reviews outside Cochrane that had been designed specifically to measure harms.56 A similar study of reviews of efficacy found that 31% of trials did not report on the primary benefit outcome.57 There is also evidence that those reporting on primary studies might choose to play down the estimates of harms and emphasise the efficacy of the intervention instead.56 57 58 Selective outcome reporting and publication bias (where entire studies remain unpublished because of unexpected findings of harms) could therefore work in a different direction from that observed with efficacy trials, where benefits are emphasised and harms are neglected or distorted.4 56 57 58 59 60 61 62 63

When statistical approaches are used to probe for the possibility of biased reporting, they should be explicitly described and used with caution. It is a common misconception that tests (such as funnel plots) can confirm that publication bias does not exist.64 65 Assessments of risk of bias across studies should focus more on presenting the extent of missing information (studies without harms outcomes), any factors that might account for their absence, and whether these reasons may be related to the results of the harms outcomes.66
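
As a rough illustration only (the study estimates below are invented), a funnel plot simply displays each study’s effect estimate against its standard error; asymmetry may suggest small-study effects or missing studies, but, as noted above, such a plot cannot confirm that publication bias is absent.

```python
# Hedged sketch: basic funnel plot of hypothetical harms effect estimates
# (log odds ratios) against their standard errors.
import numpy as np
import matplotlib.pyplot as plt

log_or = np.array([0.10, 0.35, -0.05, 0.60, 0.25, 0.80, 0.15])  # illustrative
se = np.array([0.08, 0.15, 0.12, 0.30, 0.10, 0.40, 0.20])       # illustrative

plt.scatter(log_or, se)
plt.axvline(np.average(log_or, weights=1/se**2), linestyle="--")  # precision-weighted mean
plt.gca().invert_yaxis()   # more precise studies appear towards the top
plt.xlabel("Log odds ratio")
plt.ylabel("Standard error")
plt.title("Funnel plot (illustrative data)")
plt.show()
```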

Item 16—additional analyses

PRISMA statement: “Describe methods of additional analyses (eg, sensitivity or subgroup analyses, meta-regression), if done, indicating which were prespecified.”

Example

“Additional predefined sensitivity analyses were done to explore the influence on effect size of different doses of tiotropium (10 µg v 5 µg), the effect of trial duration, and the influence of individual trials. We estimated the annualized number needed to treat for mortality associated with tiotropium mist inhaler by applying the pooled relative risk from the meta-analysis to the average control event rate in the long term trials . . . The number needed to treat for mortality is the number of patients with chronic obstructive pulmonary disease treated with tiotropium mist inhaler for one year, rather than placebo, associated with one additional death.”67
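
The number needed to treat calculation described in this example can be sketched with hypothetical numbers (the relative risk and control event rate below are illustrative assumptions, not the figures from the cited review): the pooled relative risk is applied to the average annual control event rate to obtain an absolute risk increase, whose reciprocal is the number of patients treated for one year per additional event.

```python
# Hedged worked example with illustrative values only.
pooled_rr = 1.50           # hypothetical pooled relative risk for the harm
control_event_rate = 0.03  # hypothetical annual event rate in the control arms

absolute_risk_increase = control_event_rate * (pooled_rr - 1)
nnh = 1 / absolute_risk_increase  # patients treated for one year per extra event
print(f"Absolute risk increase: {absolute_risk_increase:.3f}; NNH ~ {nnh:.0f}")
```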

Recommendations for reporting harms in systematic reviews

Sensitivity analyses might be affected by different definitions, grading, and attribution of adverse events, because adverse events are typically infrequent or reported using heterogeneous classifications (see item 11). If factors associated with harms are investigated, review authors should report the number of participants and studies included in each subgroup.68

Item 17—study selection

PRISMA statement: “Give numbers of studies screened, assessed for eligibility, and included in the review, with reasons for exclusions at each stage, ideally with a flow diagram.”

Example

“The initial database searching identified 5,062 articles. Review of the abstracts/titles and exclusion of irrelevant/duplicate articles yielded 1,034 articles. Of these articles, we excluded 896 for the documented reasons. We therefore included 138 studies, along with an additional five studies identified through other searching; a total of 143 studies (see Additional file 3 for References to all included reports). In the case of multiple publications from the same study, we included the report with the most relevant data relating to adverse effects.”69

Recommendations for reporting harms in systematic reviews

If a review addresses both efficacy and harms, it can be useful to display a flow diagram specific for each, so the reader can have a clear idea of how many studies were included and excluded for the efficacy outcome and the harms outcome (example shown in fig 1 69).


Fig 1 Flow diagram, relating to example in item 17 (study selection) of the PRISMA harms checklist. Figure adapted from reference 69. *Numbers indicate level of evidence

Item 18—study characteristics

PRISMA statement: “For each study, present characteristics for which data were extracted (eg, study size, PICOS, follow-up period) and provide the citations.”

PRISMA harms extension: “Define each harm addressed, how it was ascertained (eg, patient report, active search), and over what time period.”

Example

Figure 2 presents an example for item 18 (study characteristics).70


Fig 2 Characteristics of included studies, relating to example in item 18 (study characteristics) of the PRISMA harms checklist. Figure adapted from reference 70

PRISMA harms item explanation

Reporting the characteristics of included studies is important to allow readers to assess the validity and generalisability of the results. Because harms are typically not reported or measured in a standardised format, we suggest reporting the following for every included study, which could be combined with the study characteristics or presented separately:

  • Definitions for each specific adverse event

  • The method of adverse events ascertainment—that is, whether passive methods (patients reported harms as they emerged) or active methods (harms actively sought) were used

  • The method of measurement—that is, whether any validated tool was used to measure them, along with an appropriate reference to its validation

  • How severity or seriousness was measured.

Recommendations for reporting harms in systematic reviews

The PRISMA statement suggests following the PICOS format for this item. For reviews of adverse events, we suggest adding the following characteristics:

  • Population: patient risk factors that were considered as possibly affecting the risk of the harm outcome

  • Intervention: any relevant professional expertise or skills (for example, if the intervention is a procedure)

  • Time: timing of all harms assessments and the length of follow-up. The timing of assessment of harms will vary across studies and these differences are important to document.

Review authors should also clarify whether studies were selected based on the length of follow-up. Some adverse outcomes may take longer to occur than the usual time required to measure efficacy of an intervention (for example, hospital readmission for total hip replacement). An appropriate interval for follow-up should be specified a priori for each type of adverse event evaluated by the review author, preferably during the protocol development. If the timing of the collection of outcomes in the primary study is insufficient (relative to the time interval necessary for the outcome to occur), then the number of events may be underestimated.1 12

Item 19—risk of bias within studies

PRISMA statement: “Present data on risk of bias of each study and, if available, any outcome level assessment (see item 12).”

Example

Figure 3 presents an example for item 19 (risk of bias within studies).71


Fig 3 Risk of bias in individual studies, relating to example in item 19 (risk of bias within studies) of the PRISMA harms checklist. Figure adapted from reference 71

Recommendations for reporting harms in systematic reviews

Review authors should consider the possible sources of bias that could affect the specific harm under consideration within the review. The study designs may be considered ideal for efficacy measurement (for example, in terms of standard indicators for risk of bias such as concealment of allocation, sequence generation, and double blinding). However, sample selection, dropouts, and the measurement of adverse events should be evaluated separately from the outcomes of benefit, as described in item 12.

Item 20—results of individual studies

PRISMA statement: “For all outcomes considered (benefits or harms), present, for each study: (a) simple summary data for each intervention group (b) effect estimates and confidence intervals, ideally with a forest plot.”

Example

Figure 4 presents an example for item 20 (results of individual studies).72


Fig 4 Forest plot of adverse events in comparison of metamizole versus non-steroidal anti-inflammatory drugs, relating to example in item 20 (results of individual studies) of the PRISMA harms checklist. Figure adapted from reference 72. Adverse events categorised according to the International Classification of Primary Care. Results from single studies are of limited interpretability but are displayed for the sake of completeness. RR=risk ratio; AE=adverse events; SAE=serious adverse events

Recommendations for reporting harms in systematic reviews

It is especially important that review authors report the actual numbers of adverse events in each study, separately for each intervention. This can pose a challenge if studies report adverse events in heterogeneous formats (for example, as proportions, or only frequencies above a specific threshold, omitting infrequent harm outcomes). Study authors could be contacted for clarification. Reporting of this item (item 20) can be combined with information for each study on how adverse events were assessed (item 18). Graphical display in a forest plot is often useful, even when the data are not combined in a meta-analysis (for example, because of significant heterogeneity).68
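
As a minimal sketch of such a display (the study names and counts below are invented), per-study risk ratios and confidence intervals can be plotted side by side without pooling them.

```python
# Hedged sketch: per-study risk ratios with 95% confidence intervals shown as
# a simple forest plot, without combining the studies in a meta-analysis.
import math
import matplotlib.pyplot as plt

# (study label, events, total in intervention arm, events, total in control arm)
studies = [("Study A", 4, 120, 2, 118),
           ("Study B", 7, 250, 3, 245),
           ("Study C", 1, 60, 1, 62)]

labels, rrs, lows, highs = [], [], [], []
for name, e1, n1, e0, n0 in studies:
    rr = (e1 / n1) / (e0 / n0)
    se = math.sqrt(1/e1 - 1/n1 + 1/e0 - 1/n0)
    labels.append(name)
    rrs.append(rr)
    lows.append(math.exp(math.log(rr) - 1.96 * se))
    highs.append(math.exp(math.log(rr) + 1.96 * se))

y = list(range(len(labels)))
plt.errorbar(rrs, y,
             xerr=[[r - lo for r, lo in zip(rrs, lows)],
                   [hi - r for r, hi in zip(rrs, highs)]],
             fmt="o")
plt.axvline(1.0, linestyle="--")  # line of no effect
plt.xscale("log")
plt.yticks(y, labels)
plt.xlabel("Risk ratio (log scale)")
plt.title("Forest plot of adverse events (illustrative data)")
plt.show()
```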

Item 21—synthesis of results

PRISMA statement: “Present results of each meta-analysis done, including confidence intervals and measures of consistency.”

PRISMA harms extension: “Describe any assessment of possible causality.”

Example

“We reviewed all adverse drug events reported through MedWatch or those submitted by the manufacturer from November 1997 to April 2008 through the Freedom of Information Act (FOIA) request. The FDA provided a full-text summary of 5944 reports involving oral, intramuscular and IV use of haloperidol. The FDA data were transferred to a Microsoft Access database and screened for the key terms torsade, QT, prolongation, wave. Incident report number, date of report, age, gender, origin of report, medication name, role of drug as categorized by the FDA (suspect, concomitant, primary suspect, secondary suspect), route, dose, units, duration, symptoms and FDA outcome category (death, life threatening, hospitalization initial or prolonged, disability, congenital anomaly, required intervention to prevent permanent damage, other) were recorded. Only those reports in which IV haloperidol was considered by the reporter to be the primary causative agent for the adverse event were reviewed. Available information included diagnosis, laboratory parameters, QTc measurement, cardiac symptoms, outcomes and a description of recovery. No peer review was applied to the MedWatch reports and the data reported in this publication reflect the original information from the FDA MedWatch database.”73

PRISMA harms item explanation

The assessment of causality is important when reviewing harms and should be interpreted in the light of the dataset obtained. Review authors should report whether causality was assessed, how it was determined, and the definitions used to establish causality, such as the Bradford Hill principles or the World Health Organization causality assessment tool.74 75 They should also state whether the synthesis was limited to instances where the relation between intervention and adverse reaction was judged as “related,” “probable,” or “possible,” and how these categories were defined.

Recommendations for reporting harms in systematic reviews

When a rare outcome is being reviewed, the choice of synthesis method could make a major difference to the results and the inferences drawn.76 77 78 Review authors should clearly report the number of comparisons made and the reasons for their choices. If data from unpublished sources are included, the data source and the effect of these studies on the results of the systematic review should be clearly reported.

Item 22—risk of bias across studies

PRISMA statement: “Present results of any assessment of risk of bias across studies (see item 15).”

No specific additional information is required for systematic reviews of harms. See item 15.

Item 23—additional analysis

PRISMA statement: “Give results of additional analyses, if done (eg, sensitivity or subgroup analyses, meta-regression (see item 16)).”

No specific additional information is required for systematic reviews of harms.

Item 24—summary of evidence

PRISMA statement: “Summarise the main findings including the strength of evidence for each main outcome; consider their relevance to key groups (eg, healthcare providers, users, and policy makers).”

No specific additional information is required for systematic reviews of harms.

Item 25—limitations

PRISMA statement: “Discuss limitations at study and outcome level (eg, risk of bias), and at review level (eg, incomplete retrieval of identified research, reporting bias).”

Example

“This systematic review and meta-analyses demonstrated that testosterone therapy in men was associated with significant increases in hemoglobin and hematocrit . . . However, caution should be exercised in interpreting these analyses because they are considered observational in nature (despite the fact that the original studies were randomized) and the associations found can be attributable to chance due to the multiple simultaneous comparisons. Subgroup interactions generated by study-level meta-analyses are considered hypothesis-generating and should be confirmed at a patient-level (in a large trial or individual patient meta-analysis) before clinical implications are inferred.

. . . Nevertheless, the quality of the evidence varied from low to medium considering the imprecision (small number of events), heterogeneity (for the outcomes of cardio metabolic risk factors, hemoglobin and hematocrit), and methodological limitations of the included trials. In particular, the brief duration of most testosterone trials limited inferences about the long term safety of this treatment. In addition, publication and reporting biases likely affected the inferences in this review because not all studies reported the outcomes of interest.”79

Recommendations for reporting harms in systematic reviews

Issues of missing data and heterogeneity in data collection or definitions of harms are common limitations of reviews addressing harms. It is important to recognise the possible limitations of meta-analysis for rare adverse events (that is, the quality and quantity of data), as well as the issues noted previously relating to collection and reporting.

Item 26—conclusions

PRISMA statement: “Provide a general interpretation of the results in the context of other evidence, and implications for future research.”

Example

“Further complicating the interpretation of our findings is that few high-quality studies have compared the safety profiles of other induction agents. None of the induction agents used in the ED are devoid of potential adverse effects. Our study did not examine immediate adverse events such as hemodynamic and cardio depressant adverse effects and factors such as ease of tracheal intubation and simplicity of use that also need to be taken into consideration when choosing the most appropriate induction agent for a given patient.

The available evidence suggests that etomidate suppresses adrenal function transiently, without demonstrating a significant effect on mortality. However, to our knowledge no studies to date have been powered to detect a difference in mortality or time in the hospital, the ICU, or receiving ventilator support. According to robust evidence that etomidate transiently decreases adrenal function and that a significant effect on mortality cannot be excluded, alternate induction agents may be considered for use in rapid sequence intubation, particularly for septic patients. More data are needed to determine etomidate’s effect on mortality.”80

Recommendations for reporting harms in systematic reviews

It is important to state conclusions in accordance with the review findings. Not infrequently, the review author states the limitations and weaknesses of the included studies regarding adverse event reporting in the discussion (for example, high risk of bias, poor reporting of adverse events), but the conclusions fail to represent these weaknesses. Because of poor quality and quantity of data on harms, it is often not possible to draw strong conclusions. In particular, when adverse events were not identified, we caution against the conclusion that the intervention is “safe,” when, in reality, its safety remains unknown.

Item 27—funding

PRISMA statement: “Describe sources of funding for the systematic review and other support (eg, supply of data); role of funders for the systematic review.”

No specific additional information is required for systematic reviews of harms.

Discussion

The PRISMA harms checklist adds to PRISMA because systematic reviews of harms differ from reviews of efficacy. Important differences in search strategy, screening, assessment of bias, and analysis must be considered when reviewing unintended effects of interventions.33 43 44 45 46 56 57 As harms are often rare, “zero” is an important value. PRISMA harms clarifies the importance of distinguishing between the absence of an event, an event that was not measured, and an event that was not reported.12 Even when harms are reported, different formats can make it difficult to pool and summarise data across studies, especially if primary reports did not have a comparison group. PRISMA harms encourages transparent reporting of how harms were evaluated, including any causality assessment. Although a systematic review cannot remedy deficiencies in primary studies, the review authors have a unique opportunity to access and evaluate the entire evidence base, and both strengths and deficits in primary studies should be highlighted in the systematic review.12 56

Systematic reviews can present a misleading picture to readers if the lack of evidence of harm is presented as evidence of safety. Inclusion of studies that did not report on harms can be as frequent as 75% in Cochrane reviews and 47% in non-Cochrane reviews.56 Special attention should be given to systematic reviews that underpin the development of treatment guidelines, because harms can be misrepresented and under-reported in those reviews. Treatment guidelines are ultimately the final pathway for translating research findings and should represent efficacy and harms equally. Poor reporting in research has serious implications for waste in healthcare research investments because it requires duplication of experiments.81 82 83 84 There is strong evidence that poor reporting of efficacy, and potentially of harms, in animal studies, clinical studies, and systematic reviews leads to misinterpretation of research results and ultimately contributes to poor patient care.12 56 57 58 81 82 83 84 85

PRISMA harms has been designed to help promote transparency in reporting. The guideline extension asks authors to describe in particular what was done (methods) and what was found (results), and encourages reporting of the reasons for methodological choices made. Peer reviewers and journal editors can also benefit from implementing PRISMA harms, because it clarifies particular issues that affect reviews addressing adverse events. Improved reporting of benefits and harms can facilitate evidence informed decision making by healthcare professionals, policy makers, and patients.

As an extension of the PRISMA statement, PRISMA harms should be used in every systematic review assessing adverse events as a primary or secondary outcome. The four essential PRISMA harms items are a minimum set to be reported to provide transparency. We strongly suggest that the “recommendations for reporting harms in systematic reviews” relating to the main PRISMA checklist should also be followed in every systematic review assessing adverse events, to promote complete reporting. Like all reporting guidelines, PRISMA harms will be improved through critical review and feedback. Its goal is not only to improve reporting, but also to stimulate an increase in the number of reviews addressing harms, promoting a balanced assessment of health interventions.

We encourage journals that already endorse the PRISMA statement to extend endorsement to include PRISMA harms, and other journals to endorse both documents. Better reporting of systematic reviews depends not on endorsement alone but on adherence to the recommendations, ultimately leading to transparency of information and improvement of patient care.

Footnotes

  • PRISMA harms steering committee: Sunita Vohra (Convenor) (University of Alberta), David Moher (Ottawa Hospital Research Institute; University of Ottawa), Doug Altman (University of Oxford), Yoon Loke (University of East Anglia), John Ioannidis (Stanford University), Pasqualina Santaguida (McMaster University), Su Golder (University of York), Liliane Zorzela (University of Alberta). All authors participated in the development of this guideline, collaborating in the draft of the manuscript and approved the submitted version.

  • PRISMA harms group (collaborators): Heather Boon (University of Toronto), Jocalyn Clark (University of Toronto, at time of work on PRISMA harms senior editor, PLOS Medicine), Sheena Derry (University of Oxford), Jim Gallivan (Health Canada), Paula Gardiner (Boston University School of Medicine), Peter Gøtzsche (Nordic Cochrane Center), Elizabeth Loder (acting head of research, The BMJ), Maryann Napoli (Center for Medical Consumers), Karen Pilkington (University of Westminster), Paul Shekelle (Veterans Affairs Greater Los Angeles Health Care System), Sonal Singh (Johns Hopkins University School of Medicine), Claudia Witt (University Zurich), Toby Lasserson (Cochrane Editorial Unit), Taixiang Wu (Chinese Clinical Trial Registry, Chinese Cochrane Center), Larissa Shamseer (Ottawa Hospital Research Institute; University of Ottawa), Cynthia Mulrow (senior deputy editor, Annals of Internal Medicine)

  • Competing interests: None declared.

  • Funding: PRISMA harms was partly funded by Alberta Innovates, Health Solutions. The researchers were fully independent from funders.

  • Ethical approval: Approval granted by the University of Alberta (no Pro00021294).

  • Provenance and peer review: Not commissioned; externally peer reviewed.
