
CC BY-NC Open access
Research

The methodological quality of individual participant data meta-analysis on intervention effects: systematic review

BMJ 2021; 373 doi: https://doi.org/10.1136/bmj.n736 (Published 19 April 2021) Cite this as: BMJ 2021;373:n736
  Huan Wang, MSc student1, Yancong Chen, MSc student1, Yali Lin, MSc student1, Julius Abesig, MPH student1, Irene XY Wu, professor1, Wilson Tam, associate professor2

  1. Xiangya School of Public Health, Central South University, 5/F, Xiangya School of Public Health, No. 238, Shang ma Yuan ling Alley, Kaifu district, Changsha, Hunan, China
  2. Alice Lee Centre for Nursing Studies, National University of Singapore, Singapore

  Correspondence to: I X Y Wu irenexywu@csu.edu.cn
  • Accepted 9 March 2021

Abstract

Objective To assess the methodological quality of individual participant data (IPD) meta-analysis and to identify areas for improvement.

Design Systematic review.

Data sources Medline, Embase, and Cochrane Database of Systematic Reviews.

Eligibility criteria for selecting studies Systematic reviews with IPD meta-analyses of randomised controlled trials on intervention effects published in English.

Results 323 IPD meta-analyses covering 21 clinical areas and published between 1991 and 2019 were included: 270 (84%) were non-Cochrane reviews and 269 (84%) were published in journals with a high impact factor (top quarter). The IPD meta-analyses showed low compliance in using a satisfactory technique to assess the risk of bias of the included randomised controlled trials (43%, 95% confidence interval 38% to 48%), accounting for risk of bias when interpreting results (40%, 34% to 45%), providing a list of excluded studies with justifications (32%, 27% to 37%), establishing an a priori protocol (31%, 26% to 36%), prespecifying methods for assessing both the overall effects (44%, 39% to 50%) and the participant-intervention interactions (31%, 26% to 36%), assessing and considering the potential of publication bias (31%, 26% to 36%), and conducting a comprehensive literature search (19%, 15% to 23%). In total, 126 (39%) IPD meta-analyses failed to obtain IPD from 90% or more of eligible participants or trials, among which only 60 (48%) provided reasons and 21 (17%) undertook certain strategies to account for the unavailable IPD.

Conclusions The methodological quality of IPD meta-analyses is unsatisfactory. Future IPD meta-analyses need to establish an a priori protocol with prespecified data syntheses plan, comprehensively search the literature, critically appraise included randomised controlled trials with appropriate technique, account for risk of bias during data analyses and interpretation, and account for unavailable IPD.

Introduction

Well conducted systematic reviews with meta-analysis of randomised controlled trials are considered to be the best source of evidence on intervention effects.1 Meta-analysis is generally done by collecting aggregate data from publications or investigators.1 The aggregated data provide an average estimation of the intervention effect (eg, risk ratio or mean difference) in a group of patients with average characteristics (eg, diabetes diagnosis, mean age 50 years, 40% women).2 This might limit the exploration of potential intervention-covariate interactions. Moreover, the validity of aggregate data meta-analysis is affected by the reporting quality of the randomised controlled trials and inconsistent definition of the outcomes across included trials.1 By collecting original data from the eligible primary studies, an individual participant data (IPD) meta-analysis has the ability to collect both published and unpublished data, derive standardised outcome definitions, use a consistent unit of analysis across included randomised controlled trials, and assess interactions between interventions and participants’ characteristics. These advantages have led to the IPD meta-analysis being regarded as the ideal approach for providing evidence on intervention effect estimation.34 IPD meta-analysis has shown substantial impact on clinical practice and research by informing the development of guidelines and design of randomised controlled trials.56 The number of yearly published IPD meta-analyses has increased over time, from eight in 1994 to 88 in 2014.4 These numbers, however, only refer to those IPD meta-analyses that were incorporated into systematic reviews—even larger numbers were published each year when IPD meta-analyses without systematic reviews were considered.4

The results of systematic reviews with meta-analysis, based on either aggregate data or IPD, are, however, not free from bias.78 Empirical evidence has indicated flaws when aggregate data meta-analyses are used for various medical conditions.9101112 Commonly reported problems include the lack of a predefined protocol, a comprehensive literature search, a list of excluded studies with justifications, and checks of funding information for the included studies.9101112 These in turn might threaten the validity of the evidence derived from systematic reviews. For instance, pre-established protocols and lists of excluded studies with justifications help prevent the exclusion of studies with unfavourable findings.17 It is, however, more difficult to design and conduct an IPD meta-analysis than an aggregate data meta-analysis, and bias could affect the validity of the results.8 Published IPD meta-analyses have shown evidence of inconsistencies in the methods used to estimate intervention effects (eg, the one stage method, involving simultaneous analysis of IPD retrieved from eligible studies, and the two stage method, in which IPD are first analysed separately for each study and then combined using a traditional meta-analysis method1314), how participant level covariates are assessed (eg, by participant subgroups, by trial subgroups, or using meta-regression), and whether trial variation was accounted for when combining IPD (eg, some IPD meta-analyses treated IPD from different trials as a mega-trial).1516 Such discrepancies suggest that evidence users such as researchers, clinicians, guideline developers, and policy makers should perform critical appraisal before applying evidence from IPD meta-analyses. Although IPD meta-analysis is a well established approach for synthesising evidence and has a direct impact on guideline development, its methodological quality remains unclear.
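To make the distinction between the two approaches concrete, a simplified formulation is given below; this is a sketch for orientation only, not the notation or the models used in any of the appraised reviews.

```latex
% Two stage: stage 1 estimates a trial specific effect \hat{\theta}_i with variance v_i
% from the IPD of trial i; stage 2 pools the estimates with a conventional
% (random effects) inverse variance meta-analysis:
\hat{\theta}_{\text{pooled}} = \frac{\sum_i w_i \hat{\theta}_i}{\sum_i w_i},
\qquad w_i = \frac{1}{v_i + \hat{\tau}^2}

% One stage (shown here for a continuous outcome): a single mixed model fitted to all
% participants j across trials i, with trial specific intercepts \alpha_i accounting
% for clustering and \theta the overall effect of treatment x_{ij}:
y_{ij} = \alpha_i + \theta\, x_{ij} + \varepsilon_{ij}
```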

In their paper, Tierney and colleagues offered guidance to evidence users on how to critically appraise the scientific rigour of IPD meta-analyses.2 They proposed eight key questions (composed of 31 signalling questions): four applied to IPD meta-analyses (questions 3, 4, 7, and 8) and the remainder to systematic reviews (questions 1, 2, 5, and 6). The eight key questions did not, however, cover some important methodological quality related components of systematic reviews, such as conflicts of interest and risk of bias of included studies.2 AMSTAR-2 (A MeaSurement Tool to Assess systematic Reviews-2) is a well developed, validated, and widely used tool for assessing the methodological quality of systematic reviews.7 It has been used to critically appraise systematic reviews covering various medical conditions.9171819 AMSTAR-2 covers general methodological components of systematic reviews but has no specific item for assessing the unique methodological components of IPD meta-analysis. It has not been used to assess the methodological quality of IPD meta-analyses. We conducted a systematic review to describe the characteristics of an up-to-date sample of IPD meta-analyses, assess the methodological quality of the sampled IPD meta-analyses, and suggest areas for improvement in future IPD meta-analyses.

Methods

Eligibility criteria

An IPD meta-analysis was considered eligible for our study if it was included in a systematic review published in English before September 2019. We developed a practical criterion based on the definition of systematic review adopted in the Cochrane handbook.1 A systematic review had to provide eligibility criteria for study inclusion and conduct systematic literature searches in at least two databases. IPD meta-analyses of randomised controlled trials (considered the best source of evidence on intervention effect estimation) were eligible regardless of the clinical area studied.2021 To be considered an IPD meta-analysis, data should have been obtained for quantitative synthesis either from the authors of randomised controlled trials or through other strategies, such as data extraction from published trials.

We excluded IPD meta-analyses that summarised evidence from non-randomised controlled trials, quasi-randomised controlled trials, observational studies, diagnostic studies, prognostic studies, and economic evaluations, along with publications that focused on methodological issues with IPD, conference abstracts, and protocols. When an IPD meta-analysis was duplicated (ie, published in different journals) or one or more versions of the same IPD meta-analysis existed, we selected the most recent version; the others were used as supplementary documents for data extraction and critical appraisal.

Literature search

Using keywords related to IPD, we searched Medline, Embase, and the Cochrane Database of Systematic Reviews from inception to 30 August 2019. Specialised search filters for systematic reviews were adopted in Medline and Embase using the Ovid platform.2223 Our search strategies were based on a recent publication by Nevitt and colleagues, and we also extracted and screened citations included in that research.4 Appendix 1 provides details of our literature search strategies.

Literature selection and data extraction

Citations retrieved from the database searches and from Nevitt and colleagues’ paper4 were screened and selected according to the eligibility criteria. We used a predeveloped and piloted data extraction form (see supplementary appendix 2) to retrieve data on basic characteristics (eg, year of publication) and other information, such as the IPD retrieval rate, from each included IPD meta-analysis. Two researchers independently selected the literature (HW and YC) and extracted data (HW, YL, JA, and YC). Discrepancies were resolved by discussion and consensus or by referring to the original publications.

Methodological quality assessment

We are unaware of a specific tool for assessing the methodological quality of IPD meta-analyses on intervention effects. As such, we synthesised items from two widely accepted criteria: AMSTAR-2 and Tierney and colleagues’ guidance.27 When the two criteria overlapped, we adopted the AMSTAR-2 item if it captured the methodological components of IPD meta-analyses; otherwise we chose the item from Tierney and colleagues’ guidance. All the non-overlapping items from AMSTAR-2 or Tierney and colleagues’ guidance were included, as they captured either the general methodological components of a systematic review or the specific methodological components of IPD meta-analysis. We also referred to other publications on the methodological quality of IPD meta-analyses.242526 A total of 22 items were included, 15 of which were adopted from AMSTAR-2; six of these were considered critical (table 1).

Table 1

Comparison of AMSTAR-2 and Tierney and colleagues’ criteria for assessing the methodological quality of IPD meta-analyses and criteria used in the current study


Supplementary appendix 3 provides detailed operational guidelines that were adopted from AMSTAR-2, Tierney and colleagues’ guidance, or consensus among coauthors based on related publications.242526 We tested the operational guidelines with a random sample of five IPD meta-analyses and revised accordingly. Two trained researchers (HW and YL) independently conducted the critical appraisal process. Disagreements were resolved by discussions and consensus. When agreement could not be reached, a senior researcher (IXYW) was consulted.

Data analysis

All collected data, including general information about the IPD meta-analyses and results of methodological quality assessments, are summarised descriptively. Basic characteristics of the IPD meta-analyses, detailed information, and critical appraisal results are presented as percentages with corresponding 95% confidence intervals, or medians with interquartile ranges or ranges, as appropriate. Based on AMSTAR-2 and Jüni and colleagues’ recommendations, we summarised the results for methodological quality assessments according to each item without generating an overall score.727 Compliance with each item is presented by year of publication to show the trends in methodological quality of the sampled IPD meta-analyses. IBM Statistical Package for Social Sciences (SPSS) 25 (IBM, Armonk, NY) was used for all data analyses.
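As an illustration of the descriptive summaries described above, the sketch below reproduces one of the reported figures under a simple normal approximation (Wald) interval. The article does not state which interval method was used, and the count of 139 compliant reviews is inferred from the reported 43% of 323 rather than taken directly from the tables.

```python
from math import sqrt

def proportion_with_ci(k, n, z=1.96):
    """Proportion and normal approximation (Wald) 95% confidence interval."""
    p = k / n
    se = sqrt(p * (1 - p) / n)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Illustrative only: 139 of 323 reviews is an inferred count (43% of 323),
# not a figure taken directly from the article's tables.
p, lo, hi = proportion_with_ci(139, 323)
print(f"{p:.0%} (95% CI {lo:.0%} to {hi:.0%})")  # about 43% (38% to 48%)
```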

Patient and public involvement

No patients were involved in conceiving the research question, choosing the outcome measures, or designing and implementing the study because of insufficient training, covid-19 related restrictions, and time constraints.

Results

A total of 15 101 records were identified through database searches and reference lists (fig 1). Of these, 2197 remained after screening of titles and abstracts, of which 1874 were excluded during full text assessment. The three most common reasons for exclusion were not being a systematic review (n=911), inclusion of non-randomised controlled trials (n=369), and being a conference abstract (n=295). In total, 323 IPD meta-analyses (see supplementary appendix 4) met the eligibility criteria and were included in this study.

Fig 1

Screening and selection process of individual participant data (IPD) meta-analyses. RCT=randomised controlled trials

Basic characteristics of IPD meta-analyses

The 323 IPD meta-analyses were published between 1991 and 2019 (median 2014; table 2). These covered 21 clinical areas according to ICD-11 (international classification of diseases, 11th revision) criteria (see supplementary appendix 5). The most studied conditions were neoplasia (n=67, 21%), diseases of the circulatory system (n=64, 20%), mental and behavioural or neurodevelopmental disorders (n=31, 10%), and diseases of the nervous system (n=26, 8%). Most of the sampled IPD meta-analyses were non-Cochrane reviews (n=270, 84%), published in journals with impact factors in the top quarter (n=269, 84%), carried out by collaborative groups (n=281, 87%), and included a corresponding author from Europe (n=246, 76%). Among the 219 (68%) IPD meta-analyses with funding support, 155 (71%) were funded from Europe (table 2). The 323 IPD meta-analyses summarised evidence for drug interventions (n=199, 62%), non-drug interventions (n=112, 35%) (see supplementary appendix 6), or both (n=12, 4%).

Table 2

Characteristics of 323 included individual participant data (IPD) meta-analyses*


Performing and reporting of IPD meta-analyses

All IPD meta-analyses searched English databases, whereas only 17 (5%) searched non-English databases. The most common approaches for pooling data were the two stage approach (n=144, 45%), both the one and the two stage approaches (n=96, 30%), and the one stage approach (n=75, 24%). Three (1%) IPD meta-analyses combined data as a “mega” trial (table 3). Supplementary appendix 7 provides details of the methods used for IPD meta-analyses. Only around half (n=174, 54%) of the IPD meta-analyses reported on harms related to interventions, with more IPD meta-analyses on drug interventions (n=123, 62%) reporting harms than IPD meta-analyses on non-drug interventions (n=41, 37%) (table 2). Among the 310 IPD meta-analyses published in or after 2000 (the year after the publication of QUOROM (Quality Of Reporting Of Meta-analyses), the first reporting guideline for systematic reviews), only 91 (29%) mentioned following any reporting guidelines (table 3).

Table 3

Details on performing and reporting of the 323 included individual participant data (IPD) meta-analyses*


IPD retrieval rate

A median of 11 (range 2-287) randomised controlled trials were included in the systematic reviews, whereas IPD were obtained from a median of seven (range 2-287) trials, giving a median proportion of 81% (range 8-100%) of included randomised controlled trials for which IPD were obtained. A total of 90 (31%) IPD meta-analyses obtained IPD from 100% of the included randomised controlled trials; 67 (23%), 98 (34%), and 37 (13%) obtained IPD from 80-99%, 50-79%, and less than 50% of the included randomised controlled trials, respectively. Among the 230 IPD meta-analyses providing information on IPD retrieval at the participant level, 79 (34%) obtained IPD from 100% of eligible participants; 81 (35%), 50 (22%), and 20 (9%) obtained IPD from 80-99%, 50-79%, and less than 50% of eligible participants, respectively (table 3).

Methodological quality

The methodological quality of the 323 sampled IPD meta-analyses was generally unsatisfactory, both on general items for systematic reviews (table 4) and on items specific to IPD meta-analyses (table 5). However, improvements were seen over time for most of the methodological items, especially in pre-establishing a protocol and data analysis plan and in accounting for risk of bias and publication bias (fig 2 and supplementary appendix 8).

Table 4

Results on general methodological items of the sampled 323 individual participant data (IPD) meta-analyses

Table 5

Results on specific methodological items of the sampled 323 individual participant data (IPD) meta-analyses

Fig 2

The methodological quality on six selected items of the 323 sampled individual participant data meta-analyses over time. Item 2: Did the report of the review contain an explicit statement that the review methods were established before conduct of the review and did the report justify any significant deviations from the protocol? Item 9-1: Did the review authors use a satisfactory technique for assessing the risk of bias in individual studies that were included in the review? Item 11-2: Was the choice of one stage or two stage analysis specified in advance or results for both approaches provided? Item 12: If meta-analysis was performed, did the review authors assess the potential impact of risk of bias in individual studies on the results of the meta-analysis or other evidence synthesis? Item 13: Did the review authors account for risk of bias in primary studies when interpreting or discussing the results of the review? Item 15: If they performed quantitative synthesis did the review authors carry out an adequate investigation of publication bias (small study bias) and discuss its likely impact on the results of the review?

Critical appraisal results on general items

Table 4 provides results for the critical appraisal on general items. The sampled IPD meta-analyses showed more than 80% compliance in three items—stating conflict of interests (92%, 95% confidence interval 89% to 95%), including PICO (population, intervention, comparator, and outcome) components in the research question and inclusion criteria (85%, 81% to 89%), and explaining the observed heterogeneity (81%, 77% to 85%). None of the items are, however, critical ones in AMSTAR-2.

The sampled IPD meta-analyses showed unsatisfactory performance for the six critical items in AMSTAR-2 that were applicable to IPD meta-analyses. Only 43% (38% to 48%) of IPD meta-analyses used a satisfactory technique for assessing the risk of bias of included randomised controlled trials (table 4). Ninety seven IPD meta-analyses (30%, 25% to 35%) did not perform any critical appraisal of the included randomised controlled trials, and 56 (17%, 13% to 22%) did not report the tool they used for critical appraisal (table 3). The sampled IPD meta-analyses showed no more than 40% compliance for the remaining five critical items—accounting for risk of bias when interpreting results (40%, 34% to 45%), providing a list of excluded studies with justifications (32%, 27% to 37%), establishing an a priori protocol and justifying any deviations (31%, 26% to 36%), assessing and considering the potential of publication bias (31%, 26% to 36%), and conducting a comprehensive literature search (19%, 15% to 23%; table 4).

The remaining five non-critical items in AMSTAR-2 that were applicable to IPD meta-analyses showed less than 50% compliance for the sampled IPD meta-analyses. Two items had no more than 20% of sampled IPD meta-analyses rated as yes: explaining the selection of the study design (10%, 7% to 14%) and reporting sources of funding for the included randomised controlled trials (18%, 14% to 22%).

Critical appraisal results on items specific to IPD meta-analyses

Except for stratifying or accounting for clustering of participants within trials (98%, 96% to 99%) and using appropriate methods to assess whether effects of interventions varied by participant characteristics (71%, 66% to 76%), the performance of the sampled IPD meta-analyses on the specific items was generally unsatisfactory, with less than 60% showing compliance with each item. A relatively low proportion of the sampled IPD meta-analyses prespecified methods either for assessing the overall effects (44%, 39% to 50%) or for assessing participant-intervention interactions (31%, 26% to 36%). Furthermore, only 48% (42% to 53%) of the sampled IPD meta-analyses labelled their analyses as prespecified or post hoc, 5% (3% to 7%) conducted exploratory analyses without labelling them as such, and the remaining 47% (42% to 53%) did not provide related information (table 5).

In total, 126 (39%, 34% to 44%) IPD meta-analyses failed to obtain IPD from 90% or more of eligible participants or trials, or both. Among them, only 60 (48%, 39% to 56%) provided reasons for not obtaining IPD, and 21 (17%, 10% to 23%) undertook certain strategies to account for the unavailable IPD. Only 56% (50% to 61%) of IPD meta-analyses checked for missing, invalid, out of range, or inconsistent items, and only 55% (50% to 61%) contacted trial authors for clarifications. Among the 127 IPD meta-analyses published after 2015, only 41 (32%, 24% to 40%) reported that the PRISMA-IPD (Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Individual Participants Data extension) was followed (table 5).

Discussion

This study identified and appraised 323 IPD meta-analyses of randomised controlled trials that focused on intervention effects. This sample of IPD meta-analyses (to August 2019) covered 21 different clinical areas and comprised Cochrane and non-Cochrane reviews as well as drug and non-drug interventions. According to our criteria, the methodological quality of the sampled IPD meta-analyses was far from satisfactory. Future IPD meta-analyses need to give careful consideration to both the general methodological components of systematic reviews (eg, establishing an a priori protocol, using a comprehensive literature search strategy, assessing the risk of bias of included randomised controlled trials with a satisfactory approach and accounting for risk of bias when interpreting results, and addressing potential publication bias) and the components specific to IPD meta-analyses (eg, prespecifying the methods used for data analyses and labelling exploratory analyses as such, providing reasons for not obtaining IPD and adopting strategies to account for unavailable IPD, checking data integrity, and clarifying uncertainties with trial authors when needed). Future IPD meta-analyses should also justify the study designs selected for inclusion and report funding information for the included randomised controlled trials, as less than 20% of the sampled IPD meta-analyses complied with these two items.

A priori protocol and exploratory analyses

A predeveloped protocol helps increase objectivity and reduce bias in systematic reviews.728 A prespecified protocol with a detailed plan for data analyses is particularly important for an IPD meta-analysis, because the raw data collected from randomised controlled trials enable reviewers to perform many analyses, which poses the risk of data being repeatedly interrogated until desired results are obtained.1 However, only 31% of the sampled IPD meta-analyses established an a priori protocol and justified important deviations from the protocol. Conducting exploratory analyses to identify potential effect modifiers, at either the trial or the participant level, is nevertheless a recognised advantage of IPD meta-analyses.1 Exploratory analysis is therefore not prohibited in IPD meta-analyses; it enables the collection of extra information on certain subgroups of participants who might benefit more from the intervention and contributes to better clinical decision making. Nonetheless, appropriate interpretation of the results from IPD meta-analyses requires full presentation of all the analyses, with exploratory analyses being labelled as such. It has been suggested that future IPD meta-analyses should predevelop the research protocol, register it in PROSPERO or the Cochrane Library, and label exploratory analyses as such.2262930

Literature searches and publication bias

The importance of a comprehensive literature search is well established in systematic reviews.1 Theoretically, IPD meta-analyses have an advantage in comprehensively identifying literature, especially unpublished randomised controlled trials, through collaboration with multiple research groups and consultation with trialists.1 However, we found that only 19% of our sampled IPD meta-analyses fulfilled the revised criterion in AMSTAR-2 for a comprehensive literature search. Evidence suggests that studies with positive results have a higher probability of being published in English journals.31 Yet only 5% of the sampled IPD meta-analyses searched non-English databases and only 39% considered non-English publications in their eligibility criteria. IPD meta-analyses therefore might not identify a representative sample of randomised controlled trials.

Although the impact of a non-representative sample of randomised controlled trials on the effect of IPD meta-analyses is unpredictable and depends on the research topic, publication bias can potentially affect the results and conclusions.8 More than two thirds of the sampled IPD meta-analyses in our study did not fulfil the methodological item related to investigation of publication bias. Future researchers should focus on reducing the risk of publication bias through a comprehensive literature search and dealing with its potential impact on the results of IPD meta-analyses.

Risk of bias of included trials

One recognised advantage of IPD meta-analyses is being able to contact trial investigators when assessing risk of bias to determine the validity of the results. Nonetheless, only 43% of the sampled IPD meta-analyses used a satisfactory technique to assess the risk of bias in included randomised controlled trials.7

The reliability of the evidence derived from an IPD meta-analysis depends on the validity of the included primary studies.3233 Therefore, accounting for risk of bias of included trials during data synthesis and discussing its potential impact when interpreting the results will help evidence users judge how much confidence to place in the findings. Furthermore, data synthesis stratified by risk of bias will provide results based exclusively on trials at low risk of bias, which will also facilitate evidence based decision making. However, the sampled IPD meta-analyses in our study performed unsatisfactorily in assessing risk of bias and in accounting for it during data synthesis and interpretation of results. Future researchers should deal with these methodological shortcomings when conducting IPD meta-analyses. The updated Cochrane risk of bias tool (RoB 2) is a suitable choice for critical appraisal of included randomised controlled trials.1

IPD retrieval

Maximising IPD retrieval is generally considered to have the potential to reduce selection bias and will likely provide more reliable results.28 It is not always easy to obtain IPD from all eligible randomised controlled trials or participants; hence 90% IPD retrieval, the cut-off used here for a large proportion of IPD retrieval, has been proposed as an acceptable target.34 Nonetheless, whether the unavailable IPD introduce bias might relate not only to the retrieval proportion but also to other factors, such as whether the unavailable IPD are associated with the direction of the effect estimate and whether sufficient power has been reached with the available IPD. These factors were also considered in the criteria used in this study. Although it is not always possible to obtain IPD from 90% or more of eligible participants or trials, authors can at least provide reasons and undertake strategies such as combining aggregate data with IPD as sensitivity analyses and comparing the differences between trials that provide IPD and those that do not.24 Such practices were not, however, common among the sampled IPD meta-analyses that did not obtain IPD from 90% or more of eligible participants or trials. Future studies should address these methodological flaws.
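A minimal sketch of one such sensitivity analysis is shown below, assuming a simple two stage, fixed effect, inverse variance pooling; the trial estimates, variances, and the choice of a fixed effect model are hypothetical illustrations rather than methods taken from any of the appraised reviews.

```python
from math import sqrt

# Hypothetical trial level estimates (eg, log odds ratios) and variances.
# 'ipd_trials': effects re-estimated from the retrieved IPD;
# 'agg_trials': effects taken from the publications because IPD were unavailable.
ipd_trials = [(-0.35, 0.04), (-0.20, 0.06), (-0.10, 0.09)]
agg_trials = [(-0.05, 0.12), (0.10, 0.15)]

def pool(trials):
    """Fixed effect inverse variance pooling of (estimate, variance) pairs."""
    weights = [1 / v for _, v in trials]
    est = sum(w * e for (e, _), w in zip(trials, weights)) / sum(weights)
    se = sqrt(1 / sum(weights))
    return est, est - 1.96 * se, est + 1.96 * se

print("IPD only:        %.2f (%.2f to %.2f)" % pool(ipd_trials))
print("IPD + aggregate: %.2f (%.2f to %.2f)" % pool(ipd_trials + agg_trials))
# Comparing the two pooled results indicates how sensitive the conclusion is
# to the trials for which IPD could not be obtained.
```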

Strengths and limitations of this review

This study has several strengths. Firstly, we assessed the methodological quality of IPD meta-analysis by including both general methodological items of systematic reviews and IPD meta-analysis specific methodological items. Secondly, we did not restrict the type of disease or intervention during the sampling process, and as such the sampled IPD meta-analyses covered a wide range of clinical areas and interventions. Thirdly, the performance of each individual methodological item was reported in detail to inform where improvement is required for future studies.

Several limitations are worth mentioning. Firstly, no IPD meta-analysis specific critical appraisal tool exists. In this study, we combined the criteria from AMSTAR-2, Tierney and colleagues’ guidance, and other related publications.2242526 This enabled us to capture the general methodological components of systematic reviews as well as those of IPD meta-analysis.35

Secondly, the operational guideline for the criteria we used in our study was developed by adopting rules from AMSTAR-2 as well as group discussion and consensus. We did not collect any external experts’ opinions, nor did we conduct a formal validation process. However, we provided detailed assessment rules for each methodological item to facilitate judgments and strictly followed them. An IPD meta-analysis specific version of the AMSTAR tool is needed in the future.

Thirdly, we only considered whether the methodological items were executed, without further inspection of how well they were carried out. For example, for the risk of bias assessment we only assessed whether a satisfactory technique was used; using such a technique is not the same as assessing the risk of bias appropriately, which was beyond the scope of this study. Likewise, information on the statistical methods used for IPD meta-analyses was based solely on the descriptions in the publications. We did not investigate further whether the statistical methods were applied correctly. Two studies indicated that the use of one stage methods was substandard in many IPD meta-analyses.1536 Further assessments are warranted on whether clustering was correctly accounted for when the one stage method was used, whether within trial interactions were appropriately separated from across trial interactions to reduce ecological bias when investigating effect modifiers, and whether model assumptions (eg, choice of random or fixed effects) were properly checked.13143738 In addition, the absence of an assessment of publication bias does not necessarily mean that publication bias exists. The authors of IPD meta-analyses should, however, provide related information to facilitate evidence based decision making; otherwise, evidence users need to reassess publication bias themselves.8
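For the interaction issue noted above, one commonly described remedy (shown here only as a standard formulation, not something assessed in this review) is to centre the participant level covariate within each trial so that the within trial interaction is estimated separately from the across trial association:

```latex
% One stage model for participant j in trial i, with treatment indicator x_{ij},
% covariate z_{ij}, and trial mean \bar{z}_i. \gamma_W is the within trial
% interaction (free of ecological bias); \gamma_A captures the across trial
% association and should be reported separately from \gamma_W.
y_{ij} = \alpha_i + \theta\, x_{ij} + \beta\, z_{ij}
       + \gamma_W\, x_{ij} (z_{ij} - \bar{z}_i)
       + \gamma_A\, x_{ij} \bar{z}_i + \varepsilon_{ij}
```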

Fourthly, owing to limited resources and time, we only sampled and critically appraised IPD meta-analyses that included randomised controlled trials on intervention effects. Conclusions from this study might not apply to IPD meta-analyses including non-randomised controlled trials or IPD meta-analyses on diagnoses, prognoses, or causes of diseases. Further studies are needed to assess the methodological quality of IPD meta-analyses in these research areas.

Fifthly, a common drawback of this type of study is that the critical appraisal process relies solely on the reporting of the publications.9 Hence, some of the results might be a reflection of the reporting quality instead of methodological quality, especially for the early published IPD meta-analyses. The release of PRISMA-IPD might help improve reporting and facilitate the critical appraisal of future IPD meta-analyses.26 Assessing the reporting of the sampled IPD meta-analyses comprehensively is beyond the scope of this study, but it has been covered previously.39

Finally, we focused on methodological quality, which is distinguished from the quality of evidence derived from the IPD meta-analyses. The latter is not only affected by the methodological quality of the IPD meta-analyses, but also depends on the features of the primary studies, such as the risk of bias and precision of the effect estimation.20

Comparisons with similar studies

We did not identify any study that comprehensively assessed the methodological quality of IPD meta-analyses. Compared with aggregate data meta-analyses, the sampled IPD meta-analyses showed better performance in synthesising data with an appropriate method, conducting a comprehensive literature search, and stating the conflicts of interests of the review.9101112 However, compared with aggregate data meta-analyses, the sampled IPD meta-analyses showed lower compliance in conducting literature selection and data extraction in duplicate, providing adequate details about included randomised controlled trials, assessing the risk of bias with a satisfactory technique and accounting for it during data analyses and interpretation of results, and investigating and discussing publication bias.9101112 Discrepancy in the sampling time frames (the past 10 years for aggregate data meta-analyses versus no time restriction for IPD meta-analyses) could have contributed to the observed differences. Reporting might be another reason: authors of IPD meta-analyses may focus on presenting IPD related details within the word limits of traditional journals, so some of the lower compliance might reflect a lack of reporting (eg, performance on literature selection and data extraction). The online supplementary appendices now offered by many journals and the release of PRISMA-IPD can improve reporting and facilitate the critical appraisal of future IPD meta-analyses. This might have contributed to the improving trends observed for several items in this study.

Ahmed and colleagues evaluated publication bias, selection bias, and unavailable data in 31 IPD meta-analyses of randomised controlled trials published between 2007 and 2009.8 Similar to our study, they found unsatisfactory performance on comprehensive literature searching (29% v 19%), unsatisfactory consideration of publication bias (32% v 31%), and a low proportion (52% v 51%) of meta-analyses obtaining IPD from 90% or more of eligible participants or trials.8 We did, however, find improving trends for these items, although there is still room for improvement. Compared with the study by Ahmed and colleagues, our study used a much larger sample and comprehensively assessed the methodological quality of the included IPD meta-analyses.8

Implications for clinical practice and future research

Compared with aggregate data meta-analysis, the distinguishing features of IPD meta-analysis have made it an ideal approach for systematic reviews.3 It also has a direct impact on healthcare practice and guideline development.5 However, results from this study, together with previous related studies, indicate that IPD meta-analyses are not necessarily free from bias.81516 Therefore clinicians and guideline developers should assess the methodological quality of IPD meta-analyses before making use of the evidence. Researchers should follow the Cochrane Handbook as well as other guidelines for conducting and reporting IPD meta-analyses to ensure the quality of the resulting IPD meta-analysis.122426 An extension of AMSTAR-2 specifically for IPD meta-analysis is needed.

Conclusions

The methodological quality of IPD meta-analyses is unsatisfactory, both on general items for systematic reviews and on items specific to IPD meta-analyses. Much effort is needed in future IPD meta-analyses to establish an a priori protocol, prespecify the methods used for data analyses, label exploratory analyses, search the literature comprehensively, use an adequate approach to assess the risk of bias of included randomised controlled trials and account for risk of bias in data analyses and interpretation of results, investigate and discuss publication bias, provide reasons for not obtaining IPD and adopt strategies to account for the unavailable IPD, check data integrity, and clarify uncertainties. The Cochrane Handbook as well as other methodological guidelines and the PRISMA-IPD statement could be used as references for future IPD meta-analyses.122426 It is suggested that the rigour of IPD meta-analyses should be assessed before their results are considered.

What is already known on this topic

  • Individual participant data (IPD) meta-analysis is regarded as the ideal approach for providing evidence on intervention effect estimation but is susceptible to bias from methodological flaws

  • Evidence has shown that published IPD meta-analyses have been conducted to inconsistent standards

  • Despite the increasing number of published IPD meta-analyses, their methodological quality has not been comprehensively evaluated

What this study adds

  • This study found that the methodological quality of IPD meta-analyses was unsatisfactory

  • IPD meta-analyses showed poor performance in general methodological items, especially in assessing and accounting for risk of bias of included trials, establishing an a priori protocol, assessing and considering the potential of publication bias, and conducting a comprehensive literature search

  • IPD meta-analyses showed poor performance in IPD specific methodological items, especially in prespecifying methods for data synthesis, checking data integrity, and undertaking strategies to account for unavailable IPD

Ethics statements

Ethical approval

Not required.

Data availability statement

The full dataset is available for review and replication (see supplementary appendix 9).

Acknowledgments

We thank Editage for English language editing.

Footnotes

  • Contributors: IXYW and WT conceived and designed the research questions. IXYW conducted the literature search. HW and YC performed the literature selection. HW, YL, JA, and YC extracted data. HW and YL conducted the critical appraisal. HW analysed data. IXYW and HW interpreted the results. HW wrote the manuscript under the supervision of IXYW and WT. All authors had full access to all the data in the study and can take responsibility for the integrity of the data and the accuracy of the data analysis. IXYW is the guarantor. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.

  • Funding: This research was supported by the High-level Talents Introduction Plan from Central South University (No 502045003) and the National Natural Science Foundation of China (No 81973709). The funders had no role in considering the study design or in the collection, analysis, interpretation of data, writing of the report, or decision to submit the article for publication.

  • Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf and declare: support from the High-level Talents Introduction Plan from Central South University and the National Natural Science Foundation of China for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.

  • The lead author (IXYW) affirms that the manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned (and, if relevant, registered) have been explained.

  • Dissemination to participants and related patient and public communities: The authors plan to disseminate the paper widely to researchers, teachers, clinicians, policy makers, and any other related persons through national and international conferences, webinars, and social media, and by establishing an email discussion group.

  • Provenance and peer review: Not commissioned; externally peer reviewed.


This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.

References