Conference Object

Robustness of Different Estimators of the Inter-Study Variance in Random Effects Meta-Analyses: A Monte Carlo Simulation.

Author(s) / Creator(s)

Blázquez-Rincón, Desirée
Sánchez-Meca, Julio
Botella, Juan
Suero, Manuel

Abstract / Description

Background: The between-study variance makes it possible to work with random-effects models in meta-analysis, which account for more sources of variability than the fixed-effect model and thereby favor the generalization of meta-analytic results. It is one of the most important parameters in random-effects meta-analyses, as it is needed to describe the parametric distribution of the effect under study and to estimate the mean effect. A wide variety of point estimators of the between-study variance are currently available, along with several ways to obtain its confidence interval. Both point and confidence interval estimators differ in the estimation method (method of moments, maximum likelihood, Bayesian, or non-parametric) and in the computation required (iterative or analytical). Moreover, confidence intervals also differ in whether or not they must be built from a point estimate. Previous studies have shown that choosing different estimators of the between-study variance may lead to different statistical conclusions. Earlier simulation work also shows that these estimators differ in bias, efficiency, and confidence interval coverage depending on variables such as the magnitude of the between-study variance, the number of studies, or the sample sizes of the primary studies included in the meta-analysis. However, the random-effects model assumes that the parametric distribution of effect sizes is normal, an assumption that is widely violated in psychology, especially in reliability generalization meta-analyses, where the effect size (a reliability coefficient) is asymmetrically distributed. Objectives and Method: As we were unable to find literature on the performance of between-study variance estimators when the parametric distribution departs from normality, the present study uses Monte Carlo simulation to (a) compare all available estimators in terms of bias, efficiency, and confidence interval coverage and width in non-normal conditions, and (b) check whether the results of previous theoretical and simulation studies are replicated. Results and Conclusions: Although not all of the estimators developed to date have yet been analysed, we compared 14 point estimators and 7 confidence intervals. With respect to point estimators, non-normal scenarios did not affect bias, but a trend was observed in efficiency. The least biased and most efficient point estimator was the positive Rukhin Bayes estimator. Concerning interval estimators, the only one that can be combined with the positive Rukhin Bayes estimator is the Wald-type interval, and its coverage was close to 95%. Moreover, non-normal scenarios did not affect the coverage probability, but they did affect the interval width.
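
The abstract above does not name the individual estimators or the simulation design. As a purely illustrative sketch of the kind of comparison described, the following Python snippet computes one well-known method-of-moments estimator of the between-study variance (the DerSimonian-Laird estimator) and evaluates its bias in a toy Monte Carlo run in which the true effects are drawn from a skewed, non-normal distribution. The choice of estimator, the exponential distribution of true effects, and all numeric settings (number of studies k, true tau^2, within-study variances, replications) are assumptions made for illustration only, not the conditions used in the study.

    import numpy as np

    def dersimonian_laird_tau2(y, v):
        """DerSimonian-Laird (method-of-moments) estimate of the
        between-study variance tau^2, truncated at zero."""
        w = 1.0 / v                          # inverse-variance weights
        y_bar = np.sum(w * y) / np.sum(w)    # weighted mean effect
        q = np.sum(w * (y - y_bar) ** 2)     # Cochran's Q statistic
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        return max(0.0, (q - (len(y) - 1)) / c)

    # Toy Monte Carlo run with non-normal (skewed) true effects.
    rng = np.random.default_rng(2021)
    k, tau2_true, n_reps = 20, 0.10, 2000    # illustrative values only
    estimates = []
    for _ in range(n_reps):
        # Shifted exponential: mean 0, variance tau2_true, right-skewed.
        theta = rng.exponential(np.sqrt(tau2_true), size=k) - np.sqrt(tau2_true)
        v_i = rng.uniform(0.02, 0.08, size=k)        # within-study variances
        y_i = rng.normal(theta, np.sqrt(v_i))        # observed effect sizes
        estimates.append(dersimonian_laird_tau2(y_i, v_i))

    print(f"true tau^2 = {tau2_true:.3f}, mean DL estimate = {np.mean(estimates):.3f}")

In this kind of design, bias is assessed as the difference between the average estimate and the true value, efficiency as the variability of the estimates across replications, and interval coverage as the proportion of replications in which the confidence interval contains the true tau^2.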

Keyword(s)

Between-study variance; point/interval estimators; Meta-analysis; Simulation; normality assumption

Persistent Identifier

https://hdl.handle.net/20.500.12034/4270
https://doi.org/10.23668/psycharchives.4833

Date of first publication

2021-05-21

Is part of

Research Synthesis & Big Data, 2021, online

Publisher

ZPID (Leibniz Institute for Psychology)

Citation

Blázquez-Rincón, D., Sánchez-Meca, J., Botella, J., & Suero, M. (2021). Robustness of Different Estimators of the Inter-Study Variance in Random Effects Meta-Analyses: A Monte Carlo Simulation. ZPID (Leibniz Institute for Psychology). https://doi.org/10.23668/PSYCHARCHIVES.4833
  • PsychArchives acquisition timestamp
    2021-05-14T13:26:33Z
  • Made available on
    2021-05-14T13:26:33Z
  • Publication status
    other
  • Review status
    notReviewed
  • Sponsorship
    Supported by the Ministry of Economy, Industry, and Competitiveness (Project PSI2016-77676-P) and the European Regional Development Fund.
  • Persistent Identifier
    https://hdl.handle.net/20.500.12034/4270
  • Persistent Identifier
    https://doi.org/10.23668/psycharchives.4833
  • Language of content
    eng
  • Dewey Decimal Classification number(s)
    150
  • DRO type
    conferenceObject
  • Leibniz subject classification
    Mathematics
  • Leibniz subject classification
    Psychology
  • Visible tag(s)
    ZPID Conferences and Workshops