Original Article
Addressing continuous data measured with different instruments for participants excluded from trial analysis: a guide for systematic reviewers

https://doi.org/10.1016/j.jclinepi.2013.11.014

Abstract

Background

We previously developed an approach to address the impact of missing participant data in meta-analyses of continuous variables in trials that used the same measurement instrument. We extend this approach to meta-analyses including trials that use different instruments to measure the same construct.

Methods

We reviewed the available literature, conducted an iterative consultative process, and developed an approach involving a complete-case analysis complemented by sensitivity analyses that apply a series of increasingly stringent assumptions about results in patients with missing continuous outcome data.

Results

Our approach involves choosing the reference measurement instrument; converting scores from different instruments to the units of the reference instrument; developing four successively more stringent imputation strategies for addressing missing participant data; calculating a pooled mean difference for the complete-case analysis and imputation strategies; calculating the proportion of patients who experienced an important treatment effect; and judging the impact of the imputation strategies on the confidence in the estimate of effect. We applied our approach to an example systematic review of respiratory rehabilitation for chronic obstructive pulmonary disease.

Conclusions

Our extended approach provides quantitative guidance for addressing missing participant data in systematic reviews of trials using different instruments to measure the same construct.

Introduction

What is new?

  • Specific guidance for addressing missing participant data for continuous outcomes measured with different instruments in meta-analyses is currently unavailable.

  • We developed an approach consisting of converting units from different instruments to the units of a reference instrument and applying four increasingly stringent data imputation strategies to address this issue.

  • We provide guidance on the importance of, and methods for, calculating the proportion of patients who experience a clinically important effect, and detailed guidance for judging the impact of risk of bias resulting from missing participant data.

Randomized controlled trials (RCTs) often suffer from missing participant data [1]. Missing participant data increase risk of bias in both individual trials and meta-analyses. This is especially concerning in positive trials (ie, those with a significant treatment effect) if, in the intervention group, the outcomes of participants with missing data are worse than the outcomes of those with available data.

The Cochrane Collaboration Handbook has proposed a strategy for handling missing participant data for dichotomous outcomes in systematic reviews. The strategy suggests conducting a complete (available) case analysis complemented by sensitivity analyses of various assumptions regarding outcomes of participants with missing data [2]. One of the most common approaches is to adopt the “worst-case scenario,” in which one assumes that participants with missing data in the intervention group had the worst outcome and those in the control group had the best possible outcome [2]. Although this assumption tests the robustness of the pooled estimates for complete-case analyses, it is typically implausible [1].
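To make the mechanics concrete, the following Python sketch contrasts a complete-case analysis with a worst-case scenario re-analysis for a single hypothetical trial with a dichotomous, unfavorable outcome. All counts are invented for illustration and are not taken from any trial or review discussed here.

    from math import exp, log, sqrt

    def risk_ratio(events_int, total_int, events_ctl, total_ctl):
        """Risk ratio (intervention vs. control) with a 95% CI on the log scale."""
        rr = (events_int / total_int) / (events_ctl / total_ctl)
        se_log_rr = sqrt(1 / events_int - 1 / total_int + 1 / events_ctl - 1 / total_ctl)
        return rr, (exp(log(rr) - 1.96 * se_log_rr), exp(log(rr) + 1.96 * se_log_rr))

    # Hypothetical trial: 100 randomized per arm, 10 with missing outcomes per arm;
    # the outcome is an unfavorable event (eg, treatment failure).
    events_int, followed_int, missing_int = 18, 90, 10
    events_ctl, followed_ctl, missing_ctl = 30, 90, 10

    # Complete-case analysis: only participants with observed outcomes.
    print(risk_ratio(events_int, followed_int, events_ctl, followed_ctl))

    # Worst-case scenario: every missing intervention participant has the event
    # and no missing control participant does.
    print(risk_ratio(events_int + missing_int, followed_int + missing_int,
                     events_ctl, followed_ctl + missing_ctl))

If the worst-case re-analysis leaves the effect essentially unchanged, the complete-case result is robust to the missing data; as noted above, however, this scenario is usually implausibly extreme.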

Our research group has proposed additional strategies that use a range of plausible assumptions that increasingly challenge the results [1]. These strategies make the assumption that those in the intervention group with missing data do relatively worse than those with available data and those in the control group with missing data do relatively better than those with available data. We have applied these strategies to RCTs [1] and systematic reviews [3] reporting dichotomous outcomes.

Until recently, no methods, including in the Cochrane Handbook, were available for addressing missing participant data for continuous outcomes in systematic reviews or for assessing their impact on the confidence in the estimate of effect. We addressed this gap by extending our work on dichotomous outcomes [3] and proposing an approach in which a complete-case analysis serves as the primary analysis, complemented by sensitivity analyses that apply a series of increasingly stringent assumptions about results in patients with missing continuous outcome data. These sensitivity analyses test the robustness of the results of the primary analysis, allowing reviewers to judge the risk of bias associated with missing participant data in individual trials of the systematic review [4]. That approach is limited to systematic reviews in which all trials used the same measurement instrument. In this article, we extend the approach to systematic reviews pooling trials that use different instruments to measure the same construct [eg, dyspnea, fatigue, emotional function, health-related quality of life (HRQoL)]. To further enhance interpretability, we illustrate how to calculate important treatment effects from the derived estimates using the minimally important difference (MID), the smallest difference that patients perceive as important [5], [6], and, in our discussion, show how the results can be interpreted in the context of a clinical practice guideline.
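As a sketch of how the MID can aid interpretation, the snippet below expresses a mean difference in MID units (in the spirit of Johnston et al. [6]) and, under the added assumption that change scores are approximately normally distributed, estimates the proportion of patients in each arm who reach the MID. The numbers, the 0.5-point MID, and the normality assumption are illustrative; the specific calculations used in the worked example appear in the full text.

    from statistics import NormalDist

    def md_in_mid_units(mean_difference, mid):
        """Express a mean difference (in reference-instrument units) in MID units."""
        return mean_difference / mid

    def proportion_reaching_mid(mean_change, sd_change, mid):
        """Proportion of patients whose change score reaches the MID, assuming
        change scores are approximately normally distributed."""
        return 1.0 - NormalDist(mu=mean_change, sigma=sd_change).cdf(mid)

    # Hypothetical reference instrument with an MID of 0.5 points.
    pooled_md = 0.8  # pooled mean difference, intervention minus control
    print(md_in_mid_units(pooled_md, mid=0.5))  # 1.6 MID units

    # Proportion reaching the MID in each arm, and the difference between arms.
    p_int = proportion_reaching_mid(mean_change=0.9, sd_change=1.2, mid=0.5)
    p_ctl = proportion_reaching_mid(mean_change=0.1, sd_change=1.2, mid=0.5)
    print(p_int, p_ctl, p_int - p_ctl)

The between-arm difference in these proportions is one way of presenting a continuous result in terms that patients and clinicians can more readily interpret.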

Section snippets

Methods

In developing our initial approach, we reviewed the available literature (including the Cochrane Handbook) on missing participant data [1], [2], [7], [8], [9] and conducted an iterative consultative process involving the nine authors of this article, including clinical epidemiologists, methodologists, and biostatisticians.

Our approach addresses analyses of aggregate trial-level data for conducting a meta-analysis, not analyses of individual participant data meta-analyses or missing …

Results

Our proposed approach consists of the following steps (a minimal illustrative sketch follows this list):

  1. Choosing the reference measurement instrument,

  2. Converting scores from different instruments to the units of the reference instrument,

  3. Imputing measures of effect and their precision,

  4. Combining observed and imputed data,

  5. Handling special cases,

  6. Calculating the proportion of patients who experience an important treatment effect, and

  7. Judging the impact of missing participant data on quality of evidence (confidence in the complete-case estimate of effect).
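The sketch below strings steps 2 through 4 together for one hypothetical trial and then pools it with a second, already-converted trial using fixed-effect inverse-variance weighting. The conversion rule (rescaling by the ratio of MIDs) and the single imputation choice shown are stand-ins chosen for illustration; the paper's own conversion formula and its four increasingly stringent imputation strategies are specified in the full text, and all numbers are hypothetical.

    from math import sqrt

    def to_reference_units(mean, sd, mid_source, mid_reference):
        """Rescale a mean and SD from a source instrument to reference-instrument
        units. Rescaling by the ratio of MIDs is an illustrative assumption here,
        not the conversion formula given in the full text."""
        factor = mid_reference / mid_source
        return mean * factor, sd * factor

    def combine_subgroups(n1, m1, sd1, n2, m2, sd2):
        """Merge an observed subgroup and an imputed subgroup into one group using
        the standard formula for combining two groups' means and SDs."""
        n = n1 + n2
        mean = (n1 * m1 + n2 * m2) / n
        var = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2
               + n1 * n2 / n * (m1 - m2) ** 2) / (n - 1)
        return n, mean, sqrt(var)

    def mean_difference(n_int, m_int, sd_int, n_ctl, m_ctl, sd_ctl):
        """Mean difference (intervention minus control) with its standard error."""
        md = m_int - m_ctl
        se = sqrt(sd_int ** 2 / n_int + sd_ctl ** 2 / n_ctl)
        return md, se

    def pool_fixed_effect(estimates):
        """Fixed-effect inverse-variance pooling of (md, se) pairs."""
        weights = [1 / se ** 2 for _, se in estimates]
        pooled = sum(w * md for (md, _), w in zip(estimates, weights)) / sum(weights)
        se_pooled = sqrt(1 / sum(weights))
        return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

    # Step 2: convert a hypothetical trial measured on another instrument
    # (MID = 4 points) to a reference instrument with MID = 0.5 points.
    m_obs_int, sd_obs_int = to_reference_units(mean=6.0, sd=8.0, mid_source=4, mid_reference=0.5)
    m_obs_ctl, sd_obs_ctl = to_reference_units(mean=2.0, sd=8.0, mid_source=4, mid_reference=0.5)

    # Steps 3-4: one illustrative imputation, assuming participants with missing
    # data in both arms did no better than the observed control mean (a stand-in
    # for the four strategies described in the full text).
    n_int, m_int, sd_int = combine_subgroups(45, m_obs_int, sd_obs_int, 5, m_obs_ctl, sd_obs_ctl)
    n_ctl, m_ctl, sd_ctl = combine_subgroups(47, m_obs_ctl, sd_obs_ctl, 3, m_obs_ctl, sd_obs_ctl)

    trial_1 = mean_difference(n_int, m_int, sd_int, n_ctl, m_ctl, sd_ctl)
    trial_2 = (0.45, 0.20)  # a second, already-converted hypothetical trial: (md, se)
    print(pool_fixed_effect([trial_1, trial_2]))

Re-running the imputation step under progressively more stringent assumptions and re-pooling is what allows reviewers to judge whether the complete-case result is robust to missing participant data.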

Discussion

We developed an approach to address the impact of missing participant data on risk of bias in meta-analyses of continuous outcomes in which trials use different instruments to measure the same construct. This approach involves first choosing the reference instrument (the instrument with the best combination of an established anchor-based MID, excellent measurement properties, and high familiarity to the target audience) and converting scores from the other instruments to units of the reference instrument. The next step involves a …

Acknowledgments

Authors' contributions: S.E. and G.H.G. conceived the study. S.E., E.A.A., R.A.M., X.S., S.D.W., D.H.-A., P.A.-C., B.C.J., and G.H.G. contributed to the study design and developed the approach. S.E. and B.C.J. extracted the outcome estimates, missing data, and missingness mechanisms from the individual studies included in the example systematic review. S.E. applied the approach to the example systematic review. S.E. completed the first draft of the manuscript. S.E. is the first author and …

References (41)

  • A.L. Ries et al. Pulmonary rehabilitation: joint ACCP/AACVPR evidence-based clinical practice guidelines. Chest (2007).

  • E.A. Akl et al. LOST to follow-up Information in Trials (LOST–IT): potential impact on estimated treatment effects. BMJ (2012).

  • Further issues in meta-analysis: intention to treat issues. The Cochrane Collaboration open learning material (2012).

  • E.A. Akl et al. Addressing dichotomous data for participants excluded from trial analysis: a guide for systematic reviewers. PLoS One (2013).

  • H.J. Schünemann et al. Commentary–goodbye M(C)ID! Hello MID, where do you come from? Health Serv Res (2005).

  • B.C. Johnston et al. Improving the interpretation of quality of life evidence in meta-analyses: the application of minimal important difference units. Health Qual Life Outcomes (2010).

  • R.H.H. Groenwold et al. Dealing with missing outcome data in randomized trials and observational studies. Am J Epidemiol (2011).

  • J.P.T. Higgins et al. Imputation methods for missing outcome data in meta-analysis of clinical trials. Clin Trials (2008).

  • Chapter 8: risk of bias.

  • D. Moher et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ (2010).

Conflict of interest: There is no financial or material support, commercial interest, or involvement with an organization with a financial interest in the research materials by any of the authors.

Funding: There were no sources of funding for this study. S.E. is supported by a MITACS Elevate and a SickKids Restracomp award. P.A.-C. is funded by a Miguel Servet investigator contract from the Instituto de Salud Carlos III (CP09/00137).
