
Is Omega Squared Less Biased? A Comparison of Three Major Effect Size Indices in One-Way ANOVA


Abstract

The purpose of this study is to identify the least biased of the three major effect size indices for one-way analysis of variance (ANOVA) by performing a thorough Monte Carlo study with 1,000,000 replications per condition. Our results show that, contrary to common belief, epsilon squared is the least biased of the three major indices, while omega squared yields the smallest root mean squared error, across all conditions. Although eta squared has the smallest standard deviation, this does not necessarily make it a good estimator, because a considerable amount of bias remains when the sample size is small.
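
For readers unfamiliar with the three indices, the sketch below illustrates the kind of comparison described above: it computes the standard sample estimators of eta squared, epsilon squared, and omega squared from one-way ANOVA sums of squares and estimates their bias and root mean squared error by simulation. This is a minimal illustration only, not the study's own code: the group sizes, population means, error variance, random seed, and number of replications are hypothetical and are not the conditions examined in the paper.

```python
import numpy as np

def effect_sizes(groups):
    """Return (eta^2, epsilon^2, omega^2) for a list of 1-D group samples."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ss_total = ss_between + ss_within
    ms_within = ss_within / (n_total - k)
    eta2 = ss_between / ss_total
    eps2 = (ss_between - (k - 1) * ms_within) / ss_total
    omega2 = (ss_between - (k - 1) * ms_within) / (ss_total + ms_within)
    return eta2, eps2, omega2

# Hypothetical condition (NOT one of the paper's): 3 groups of n = 10,
# normal errors with sigma = 1, and population means 0, 0, 0.5.
rng = np.random.default_rng(1)
n, sigma = 10, 1.0
mu = np.array([0.0, 0.0, 0.5])

# Population effect size under the usual fixed-effects parameterization:
# sigma^2_between / (sigma^2_between + sigma^2_error).
sigma2_between = ((mu - mu.mean()) ** 2).mean()
true_es = sigma2_between / (sigma2_between + sigma ** 2)

reps = 10_000  # the paper reports 1,000,000 replications per condition
est = np.array([effect_sizes([rng.normal(m, sigma, n) for m in mu])
                for _ in range(reps)])
bias = est.mean(axis=0) - true_es
rmse = np.sqrt(((est - true_es) ** 2).mean(axis=0))
for name, b, r in zip(("eta^2", "epsilon^2", "omega^2"), bias, rmse):
    print(f"{name:10s} bias = {b:+.4f}   RMSE = {r:.4f}")
```

Under a small-sample condition such as this one, a sketch of this form shows the pattern the abstract describes: eta squared is biased upward, while the corrected estimators (epsilon squared and omega squared) trade some of that bias for slightly larger variability.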



Author information

Corresponding author

Correspondence to Kensuke Okada.

Additional information

This work was supported in part by grants from the Japan Society for the Promotion of Science (24730544, 090100000119, 23300310) and by a Strategic Research Foundation Grant-aided Project for Private Universities grant from MEXT, Japan (2011–2015, S1101013).

About this article

Cite this article

Okada, K. Is Omega Squared Less Biased? A Comparison of Three Major Effect Size Indices in One-Way ANOVA. Behaviormetrika 40, 129–147 (2013). https://doi.org/10.2333/bhmk.40.129

