Replications of forecasting research

https://doi.org/10.1016/j.ijforecast.2009.09.003

Abstract

We have examined the frequency of replications published in the two leading forecasting journals, the International Journal of Forecasting (IJF) and the Journal of Forecasting (JoF). Replications in the IJF and JoF between 1996 and 2008 comprised 8.4% of the empirical papers. Replication rates in other areas of management science range from 2.2% in the Journal of Marketing Research to 18.1% in the American Economic Review. We also found that 35.3% of the replications in forecasting journals provided full support for the findings of the initial study, 45.1% provided partial support, and 19.6% provided no support. Given the importance of replications, we recommend various steps to encourage them, such as requiring full disclosure of the methods and data used in all published papers, and inviting researchers to replicate specific important papers.

Introduction

Gardner and Diaz-Saiz (2008) replicated and extended research by Fildes, Hibon, Makridakis, and Meade (1998), which was itself an extension of the M-Competition study (Makridakis et al., 1982). After changing the estimation procedures, they found support for the primary conclusion but disagreed with one of the secondary conclusions. This demonstrates the value of replications, both in showing where we can gain confidence and in indicating areas in need of further research.

Experts have claimed that replication is vital to scientific progress (e.g. Hunter, 2001). Replications help to ensure that findings can be reproduced. Extensions go beyond that, and examine whether the findings can be generalized.

Despite these benefits, relatively few of the papers published in the various areas of management science are replications (Hubbard and Vetter, 1996; Evanschitzky et al., 2007). A number of reasons for this have been suggested. First, and perhaps most important, many studies in management science are unimportant, and it would be senseless to replicate them. Second, authors seldom provide sufficient detail in the paper (or in response to requests) to allow for replication. Third, reviewers seem to be biased against replications, either because they think that replications offer nothing new or because the results are not statistically significant.

The misinterpretation of null hypothesis testing procedures may also have undermined the perceived need for replication. Oakes (1986) showed that 42 out of 70 (60%) experienced academic psychologists falsely believed that an experimental outcome significant at the 0.01 level has a 0.99 probability of being statistically significant if the study were replicated. This belief is false: the probability that an independent replication will reach significance depends on the statistical power of the replication, not on the original p-value. Such a low level of statistical knowledge among some academics may be leading to the erroneous conclusion that replications are not needed.
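As a concrete illustration, the short simulation below (ours, not from Oakes' study) estimates the probability that an independent replication of a result significant at the 0.01 level will itself reach the conventional 0.05 level. The effect size, sample size, and the choice of a two-sample t-test are assumptions made purely for illustration; with these settings the answer is roughly the replication's power of about 0.8, far below 0.99.

```python
# A minimal sketch (ours, not from the paper) of the scenario described above.
# All numbers -- the true effect size d, the per-group sample size n, and the
# use of two-sample t-tests -- are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d, n, trials = 0.5, 64, 20_000   # assumed effect size, group size, repetitions

def study_pvalue():
    """Run one two-group study with a real effect of size d; return its p-value."""
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(d, 1.0, n)
    return stats.ttest_ind(control, treatment).pvalue

originals = replications = 0
for _ in range(trials):
    if study_pvalue() < 0.01:        # "original" study significant at the 0.01 level
        originals += 1
        if study_pvalue() < 0.05:    # independent replication at the 0.05 level
            replications += 1

print(f"P(replication significant | original p < 0.01) = {replications / originals:.2f}")
# Prints roughly 0.8 with these settings -- the replication's statistical
# power -- nowhere near the 0.99 that the surveyed psychologists believed.
```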

Based on these observations, we examine the state of replication research in forecasting and then suggest ways to make further improvements with respect to replication.

Section snippets

The record of replications in the leading forecasting journals

The definitions of the central terms in this study are extensions of those employed by Hubbard and Armstrong (1994, p. 236). A replication is defined as “a duplication of a previously published empirical study that is concerned with assessing whether similar findings can be obtained upon repeating the study”. Likewise, a replication with an extension is “a duplication of a previously published empirical research project that serves to investigate the ability to generalize earlier research …

Discussion

Given the relatively small number of replications across all disciplines of management science, in conjunction with the majority of replication attempts failing to provide support for the initial findings, we call for an increase in published replications of important papers. To aid in this increase, the data and methods used in the original studies should be made available on the Internet, concurrently with a paper’s publication. This is important because over time authors often lose track of …

Conclusions

In comparison with the empirical studies published in other areas of management science, the replication rate in the International Journal of Forecasting (10.4% of empirical papers published) is well above the median of 6.6%, while the Journal of Forecasting replication rate (6.3%) is slightly below the median. It is difficult to say what the optimal number of published replication studies would be. However, given that many replications do not support the original findings (about 1/5 …

Acknowledgments

We thank all those authors who replied to our request to ensure that they were accurately cited. In addition, Nils Petter Gleditsch, Raymond Hubbard, Bruce D. McCullough and the reviewers provided useful suggestions.


Cited by (32)

  • Changing research culture toward more use of replication research: a narrative review of barriers and strategies

    2021, Journal of Clinical Epidemiology
    Citation Excerpt:

    Editors should also be trained to have the required skills to review replications and recognize their importance for knowledge development [12,15,66]. Some authors suggested that journals could appoint a replication editor or an editorial board devoted to replication [7,42,66]. Still others stated that it is also important to promote use of adequate reporting guidelines and submission of detailed appendices including experimental protocol, instructions to subjects, measurement tools, and so on [28,49,52,78].

  • Replication Research Series-Paper 1: A concept analysis and meta-narrative review established a comprehensive theoretical definition of replication research to improve its use

    2021, Journal of Clinical Epidemiology
    Citation Excerpt:

    Replication can also refer to a study that only has a certain level of similarity with the original experiment. In this case, researchers purposively make minor or important changes to the study to evaluate possible generalization and extension of previous research findings [1,2,12,31,33,34,35,43,46,47,53–58]. Examples would be changes in either the manipulated or measured variables, investigating the influence of additional variables, repeating the study using different populations, contexts, geographical areas, time periods, or using any combination of the aforementioned changes [59].

  • A comparison of brand loyalty between on the go and take-home consumption purchases

    2020, Journal of Retailing and Consumer Services
    Citation Excerpt:

    The primary direction for future research is replication of the findings, preferably in another country, and with other or a greater variety of product categories. Prominent researchers have lamented the lack of replication in marketing for decades (Hubbard and Armstrong, 1994; Evanschitzky and Armstrong, 2010; Uncles, 2011; Lehmann and Bengart, 2016). Hopefully this research area is one in which replications can be conducted.

  • Skating on thin evidence: Implications for public policy

    2018, European Journal of Political Economy

Invited paper.
