Who herds?

https://doi.org/10.1016/j.jfineco.2005.07.006

Abstract

This paper develops a test for herding in forecasts by professional financial analysts that is robust to (a) correlated information amongst analysts; (b) common unforecasted industry-wide earnings shocks; (c) information arrival over the forecasting cycle; (d) the possibility that the earnings that analysts forecast differ from what the econometrician observes; and (e) systematic optimism or pessimism among analysts. We find that forecasts are biased, but that analysts do not herd. Instead, analysts “anti-herd”: Analysts systematically issue biased contrarian forecasts that overshoot the publicly-available consensus forecast in the direction of their private information.

Introduction

Both the financial press and academic research regularly suggest that security analysts herd toward the consensus forecast, issuing forecasts that underweight their own information (see, e.g., Trueman, 1990, Trueman, 1994, Hong et al., 2000). These herding stories have arisen because forecasts often seem “too clustered.” For example, Gallo et al. (2002) find that forecasts of gross domestic product (GDP) converge as the date at which GDP is announced draws nearer, but that invariably final forecasts are either uniformly too low or too high. The authors conclude that forecasters herd.

But, clustered forecasts need not imply that analysts herd. First, earlier forecasts may contain valuable information that subsequent analysts incorporate into their forecasts (Welch, 2000). Second, analysts rely on common information sources such as a company's chief financial officer (CFO) for information. If a CFO tells each analyst the same thing, their forecasts will reflect this common (perhaps mis)information, so that their forecasts will all tend to be too high or too low relative to realized earnings. Third, common unanticipated market-wide earnings shocks can cause most, if not all, forecasts to be too low or too high relative to earnings. For instance, the fact that during the 1990-1991 recession most forecasts exceeded earnings does not imply that forecasts were upward-biased; instead, the earnings fall may have been unanticipated by the market. Fourth, the measure of earnings that analysts forecast may differ from the earnings that the econometrician sees (Keane and Runkle, 1998). For example, analysts may not seek to forecast exceptional items that appear in reported earnings. Fifth, analysts may be systematically optimistic or pessimistic,1 so that forecasts either tend to exceed or fall short of the consensus, again creating the appearance of herding (see Richardson et al., 2004).

This paper develops tests for herding in the earnings forecasts issued by professional analysts that are robust to these concerns. In environments with information arrival, an unbiased analyst combines all information at his disposal and updates to obtain a posterior distribution over earnings. A forecast is unbiased if it corresponds to the analyst's best estimate of earnings given all available information, i.e., if it corresponds to the mean or median of the analyst's posterior distribution over earnings. In its most basic form, herding amounts to biasing a forecast away from an analyst's best estimate, toward the consensus forecast of earlier analysts; while anti-herding amounts to biasing a forecast away from that consensus.2 Our tests look at the frequency with which these biases occur.

The key insight underlying our tests is simple. For the benchmark case of an unbiased analyst, his forecast equals the median of his posterior of earnings given all information at his disposal. It thus follows that the analyst's forecast should be as likely to exceed realized earnings as to fall short, both unconditionally, and conditional on anything in his information set, including the consensus forecast of analysts who have reported earlier. If, instead, an analyst herds, biasing his forecast toward the extant consensus, then his forecast will be located between the consensus and his best estimate of earnings. Hence, if an analyst herds and his forecast exceeds the consensus of earlier analysts, then it should fall short of realized earnings more than half of the time. So, too, when a herding analyst's forecast falls short of the consensus, it should exceed earnings more than half of the time. The opposite outcome is predicted if analysts anti-herd: An analyst who anti-herds issues a forecast that overshoots his best estimate of earnings in the direction away from the consensus.
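This prediction can be checked in a minimal simulation sketch. The signal structure below (normal prior, two noisy signals with made-up variances, and a linear bias rule with strength `lam`) is purely illustrative and is not the paper's model; it only demonstrates the logic that an unbiased forecast overshoots earnings half the time conditional on its position relative to the consensus, while herding and anti-herding push the two conditional frequencies below and above one-half, respectively.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Illustrative signal structure (all distributions and variances assumed).
E = rng.normal(0.0, 1.0, n)        # realized earnings, prior N(0, 1)
c = E + rng.normal(0.0, 0.8, n)    # extant consensus, a noisy signal of E
s = E + rng.normal(0.0, 0.8, n)    # analyst's private signal

# Posterior mean of E given both signals (standard normal updating);
# for a normal posterior this is also the median, i.e., the unbiased forecast.
prec = 1.0 + 2.0 / 0.8**2
best = (s + c) / 0.8**2 / prec

def overshoot_rate(f):
    """Return (P[f > E | f > c], P[f < E | f < c]) estimated by simulation."""
    hi, lo = f > c, f < c
    return (f[hi] > E[hi]).mean(), (f[lo] < E[lo]).mean()

lam = 0.5                           # illustrative bias strength
herd = best + lam * (c - best)      # biased toward the consensus
anti = best + lam * (best - c)      # overshoots away from the consensus

print("unbiased    :", overshoot_rate(best))  # both frequencies near 0.5
print("herding     :", overshoot_rate(herd))  # both below 0.5
print("anti-herding:", overshoot_rate(anti))  # both above 0.5
```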

The key feature that these observations share is that no assumptions are made about how an analyst forms his posterior. As a result, our tests for unbiasedness and herding are unaffected by signal correlation or information arrival. Essentially, we estimate two conditional probabilities: (a) the conditional probability that a forecast exceeds realized earnings given that the forecast exceeds the extant consensus forecast (and perhaps given other conditioning information), and (b) the conditional probability that a forecast falls short of earnings given that the forecast falls short of the extant consensus. Crucially, to control for possible unforecasted earnings shocks, our test statistic averages out these two conditional probabilities: Under the null of unbiasedness, an unforecasted earnings shock has offsetting impacts on the frequency with which each overshooting event occurs. So, too, systematic optimism or pessimism has offsetting effects on the two conditional probability estimates. Our analysis demonstrates the robustness of our test and the ease with which it can be taken to the data.
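The two conditional probabilities and their average are straightforward to compute from forecast data. The sketch below treats the statistic as a simple average of the two empirical frequencies; the paper's exact estimator and its standard errors are not reproduced here, and the toy numbers are made up.

```python
import numpy as np

def s_statistic(forecast, consensus, actual):
    """Average of the two conditional overshooting frequencies:
    0.5 * (P[f > actual | f > consensus] + P[f < actual | f < consensus]).
    Under unbiasedness the average is 0.5; values above 0.5 suggest
    anti-herding and values below 0.5 suggest herding. (A sketch of the
    paper's S statistic; estimator details are assumed.)"""
    forecast, consensus, actual = map(np.asarray, (forecast, consensus, actual))
    above = forecast > consensus
    below = forecast < consensus
    p_over = (forecast[above] > actual[above]).mean()
    p_under = (forecast[below] < actual[below]).mean()
    return 0.5 * (p_over + p_under)

# Toy example with made-up EPS forecasts, consensus values, and actuals.
f = np.array([1.10, 0.95, 1.30, 0.80, 1.05])
c = np.array([1.00, 1.00, 1.20, 0.90, 1.10])
a = np.array([1.05, 1.00, 1.25, 0.85, 1.00])
print(s_statistic(f, c, a))  # above 0.5 for this toy sample
```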

No matter where we look in our large data set of earnings forecasts by professional analysts, we find strong evidence against herding behavior. Our tests show that analysts systematically issue biased contrarian forecasts that overshoot the consensus forecast in the direction of their private information. In particular, the conditional probability that an analyst's forecast overshoots actual earnings per share (EPS) in the direction away from the consensus is 0.59. Analysts exhibit a contrarian bias whether their private information is positive or negative relative to the consensus: Forecasts that fall short of the consensus fall short of EPS 63% of the time, while those that exceed the consensus exceed EPS 56% of the time. Notably, overshooting rates are remarkably stable, varying by less than 5 percentage points across analyst order (second, third, last, etc.) and by less than 7 percentage points across years and analyst coverage, despite large variations in common earnings shocks. As a check on the consistency of our inferences, we also look at forecast revisions.3 Overshooting frequencies in revised forecasts, too, are consistent with strategic anti-herding behavior. We then use a Monte Carlo analysis to back out the overshooting bias that reconciles our overshooting frequencies. We find that analysts overshoot their best estimate of earnings, introducing an overshooting bias equal to about 20% of the forecast-consensus difference (normalized by share price). This suggests that the extent of contrarian bias in forecasts is economically large.
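The calibration step can be illustrated with a toy Monte Carlo: simulate forecasts under an assumed linear anti-herding rule and search for the bias parameter whose overall overshooting frequency matches a target rate such as the 0.59 reported above. The signal structure, the linear bias rule, and the grid search below are assumptions for illustration only, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400_000

# Assumed signal structure (illustrative variances, not the paper's).
E = rng.normal(0.0, 1.0, n)        # realized earnings
c = E + rng.normal(0.0, 0.8, n)    # extant consensus
s = E + rng.normal(0.0, 0.8, n)    # private signal
prec = 1.0 + 2.0 / 0.8**2
best = (s + c) / 0.8**2 / prec     # posterior mean = unbiased forecast

def overshoot_freq(lam):
    """Frequency with which a forecast with anti-herding bias lam
    overshoots earnings in the direction away from the consensus."""
    f = best + lam * (best - c)
    away = np.where(f > c, f > E, f < E)
    return away.mean()

# Grid-search the bias strength that reproduces the target rate.
target = 0.59
grid = np.linspace(0.0, 1.0, 101)
freqs = np.array([overshoot_freq(l) for l in grid])
lam_hat = grid[np.argmin(np.abs(freqs - target))]
print(lam_hat)  # implied bias strength under these toy assumptions
```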

Most past attempts to detect forecast herding estimate the deviation of each forecast from the mean of all forecasts reported in the forecasting cycle (see, e.g., Hong et al., 2000; and Lamont, 2002). However, there are concerns with this testing strategy: It does not account for correlation in information, unforecasted earnings shocks, or information arrival. Using a data set of analyst recommendations, Welch (2000) finds that the prevailing consensus and the two most recent revisions have a significant positive influence on the next analyst's recommendation. Welch's findings reveal that it is important to account for the fact that recent revisions contain useful information that other analysts incorporate into their recommendations. Keane and Runkle (1998) attempt to control for correlated forecast errors and conclude that professional stock market analysts issue unbiased forecasts. Our findings suggest that this conclusion is misplaced. We conjecture that the lack of controls for signal correlation in Keane and Runkle's test may explain their failure to uncover the large contrarian biases we document (see also Zitzewitz, 2001).

Our paper adds to a stream of research challenging the prevalent view that analysts systematically herd. Using a regression-based approach, Zitzewitz (2001) backs out empirical estimates of the forecast bias function that suggest that financial analysts do not herd. Bernhardt et al. (2004) consider the strategic behavior of analysts who report last in the forecasting cycle. Using the test statistic developed in this paper, and regression methods similar to Zitzewitz's, they find evidence consistent with their theoretical model of relative performance evaluation and incentives for anti-herding. More recently, Chen and Jiang (2006) propose a frequency test that estimates the probability that the forecast error has the same sign as the forecast-consensus difference. Their sign test shares some basic similarities with the test we develop. Unfortunately, as we show, systematic pessimism or optimism biases their test statistic toward their anti-herding findings, as do unforecasted common earnings shocks and systematic deviations in the earnings that the econometrician observes relative to what analysts forecast.

The remainder of the paper is organized as follows. Section 2 develops our testing methodology. Section 3 presents our empirical findings and considers their robustness. Section 4 concludes.

Do analysts herd?

An analyst has access to information uncovered through his own research, as well as public information released by the firm and forecasts by earlier analysts. The analyst uses all of this information to update and form a posterior distribution over earnings. An analyst's forecast is unbiased if the forecast is equal to his posterior estimate of the median or mean of earnings per share. Such an unbiased forecast—one that incorporates all available information—is the forecast of greatest value to

Empirical analysis

We now implement various tests of herding/anti-herding using the S statistic developed above. First, however, we describe our data selection process.

Conclusion

This paper develops a robust frequency test for herding and anti-herding biases in professional analysts’ earnings forecasts. Detecting such forecast bias is difficult because analysts rely on common sources of information and are surprised by the same events. Our test is designed to be robust to correlated signals among analysts, common unforecasted shocks to earnings, information arrival, the possibility that the measure of earnings that analysts forecast may differ from the earnings that the econometrician observes, and systematic optimism or pessimism among analysts.



We are grateful to the Institutional Brokers Estimate System (I/B/E/S), a service of I/B/E/S International Inc. for providing data on analyst forecasts. We thank Long Chen, Roger Koenker, Pat O’Brien, and Selim Tepaloglu for their useful comments and suggestions, and Kofi Laing for help with our Perl program. Comments from seminar participants at Yale University, University of Rochester, University of Southern California, University of Colorado, University of Waterloo, University of Illinois, Claremont-McKenna College, University of British Columbia, Simon Fraser University, and the Federal Reserve Bank of Chicago are also appreciated. The usual disclaimer applies.
