Monetary policy in a data-rich environment

https://doi.org/10.1016/S0304-3932(03)00024-2

Abstract

Most empirical analyses of monetary policy have been confined to frameworks in which the Federal Reserve is implicitly assumed to exploit only a limited amount of information, despite the fact that the Fed actively monitors literally thousands of economic time series. This article explores the feasibility of incorporating richer information sets into the analysis, both positive and normative, of Fed policymaking. We employ a factor-model approach, developed by Stock and Watson (1999, 2002), that permits the systematic information in large data sets to be summarized by relatively few estimated factors. With this framework, we reconfirm Stock and Watson's result that the use of large data sets can improve forecast accuracy, and we show that this result does not seem to depend on the use of finally revised (as opposed to “real-time”) data. We estimate policy reaction functions for the Fed that take into account its data-rich environment and provide a test of the hypothesis that Fed actions are explained solely by its forecasts of inflation and real activity. Finally, we explore the possibility of developing an “expert system” that could aggregate diverse information and provide benchmark policy settings.

Introduction

Monetary policy-makers are inundated by economic data. Research departments throughout the Federal Reserve System, as in other central banks, monitor and analyze literally thousands of data series from disparate sources, including data at a wide range of frequencies and levels of aggregation, with and without seasonal and other adjustments, and in preliminary, revised, and “finally revised” versions. Nor is exhaustive data analysis performed only by professionals employed in part for that purpose; observers of Alan Greenspan's chairmanship, for example, have emphasized his own meticulous attention to a wide variety of data series (Beckner, 1996).

The very fact that central banks bear the costs of analyzing a wide range of data series suggests that policy-makers view these activities as relevant to their decisions. Indeed, recent econometric analyses have confirmed the longstanding view of professional forecasters that the use of a large number of data series may significantly improve forecasts of key macroeconomic variables (Stock and Watson, 1999, 2002; Watson, 2000). Central bankers’ reputations as data fiends may also reflect motivations other than minimizing average forecast errors, including multiple and shifting policy objectives, uncertainty about the correct model of the economy, and the central bank's political need to demonstrate that it is taking all potentially relevant factors into account.1

Despite this reality of central bank practice, most empirical analyses of monetary policy have been confined to frameworks in which the Fed is implicitly assumed to exploit only a limited amount of information. For example, the well-known vector autoregression (VAR) methodology, used in many recent attempts to characterize the determinants and effects of monetary policy, generally limits the analysis to eight macroeconomic time series or fewer.2 Small models have many advantages, including most obviously simplicity and tractability. However, we believe that this divide between central bank practice and most formal models of the Fed reflects at least in part researchers’ difficulties in capturing the central banker's approach to data analysis, which typically mixes the use of large macroeconometric models, smaller statistical models (such as VARs), heuristic and judgmental analyses, and informal weighting of information from diverse sources. This disconnect between central bank practice and academic analysis has, potentially, several costs: First, by ignoring an important dimension of central bank behavior and the policy environment, econometric modeling and evaluation of central bank policies may be less accurate and informative than it otherwise would be. Second, researchers may be foregoing the opportunity to help central bankers use their extensive data sets to improve their forecasting and policymaking. It thus seems worthwhile for analysts to try to take into account the fact that in practice monetary policy is made in a “data-rich environment”.

This paper is an exploratory study of the feasibility of incorporating richer information sets into the analysis, both positive and normative, of Federal Reserve policy-making. Methodologically, we are motivated by the aforementioned work of Stock and Watson. Following earlier work on dynamic factor models,3 Stock and Watson have developed dimension reduction schemes, akin to traditional principal components analysis, that extract key forecasting information from “large” data sets (i.e., data sets for which the number of data series may approach or exceed the number of observations per series). They show, in simulated forecasting exercises, that their methods offer potentially large improvements in the forecasts of macroeconomic time series, such as inflation. From our perspective, the Stock–Watson methodology has several additional advantages: First, it is flexible, in the sense that it can potentially accommodate data of different vintages, at different frequencies, and of different spans, thus replicating the use of multiple data sources by central banks. Second, their methodology offers a data-analytic framework that is clearly specified and statistically rigorous but remains agnostic about the structure of the economy. Finally, although we do not take advantage of this feature here, their method can be combined with more structural approaches to improve forecasting still further (Stock and Watson, 1999).
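As a rough illustration of the diffusion-index idea, the Python sketch below extracts factors by principal components from a large standardized panel and uses them, together with lags of the target, in an h-step-ahead forecasting regression. The function name, the two-factor choice, and the four-lag choice are illustrative assumptions, not settings taken from the paper.

```python
import numpy as np

def diffusion_index_forecast(X, y, h=12, k=2, p=4):
    """Illustrative h-step-ahead 'diffusion index' forecast.

    X : (T, N) panel of predictors (N may be large relative to T)
    y : (T,) target series, e.g. monthly inflation
    k : number of estimated factors
    p : number of lags of y included alongside the factors
    """
    T, N = X.shape
    # Standardize each series, then estimate factors by principal components.
    Z = (X - X.mean(0)) / X.std(0)
    # Left singular vectors of Z give the (T, k) factor estimates (up to rotation).
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    F = U[:, :k] * S[:k]

    # Forecasting regression: y_{t+h} = c + b'F_t + sum_j a_j y_{t-j} + e_{t+h}.
    rhs, lhs = [], []
    for t in range(p - 1, T - h):
        rhs.append(np.r_[1.0, F[t], y[t - p + 1:t + 1][::-1]])
        lhs.append(y[t + h])
    rhs, lhs = np.array(rhs), np.array(lhs)
    beta, *_ = np.linalg.lstsq(rhs, lhs, rcond=None)

    # Out-of-sample forecast from the last available observation.
    x_T = np.r_[1.0, F[-1], y[-p:][::-1]]
    return x_T @ beta
```

Applied period by period over an expanding sample of data vintages, a routine of this sort captures the flavor of a simulated out-of-sample forecasting exercise.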

The rest of our paper is structured as follows. Section 2 extends the research of Stock and Watson by further investigating the value of their methods in forecasting measures of inflation and real activity (and, by extension, the value of those forecasts as proxies for central bank expectations). We consider three alternative data sets: first, a “real-time” data set, in which the data correspond closely to what was actually observable by the Fed when it made its forecasts; second, a data set containing the same time series as the first but including only finally revised data; and third, a much larger, and revised, data set based on that employed by Stock and Watson (2002). We compare forecasts from these three data sets with each other and with historical Federal Reserve forecasts, as reported in the Greenbook. We find, in brief, that the scope of the data set (the number and variety of series included) matters very much for forecasting performance, while the use of revised (as opposed to real-time) data seems to matter much less. We also find that “combination” forecasts, which give equal weight to our statistical forecasts and Greenbook forecasts, can sometimes outperform Greenbook forecasts alone.
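In our notation (not the paper's), with SW and GB denoting the statistical (Stock–Watson) and Greenbook forecasts respectively, the equal-weight combination forecast for inflation at horizon h is simply

```latex
\hat{\pi}^{\,\mathrm{comb}}_{t+h\mid t} \;=\; \tfrac{1}{2}\,\hat{\pi}^{\,\mathrm{SW}}_{t+h\mid t} \;+\; \tfrac{1}{2}\,\hat{\pi}^{\,\mathrm{GB}}_{t+h\mid t},
```

and comparisons of forecast accuracy of this kind are conventionally made in terms of root mean squared error over the evaluation sample,

```latex
\mathrm{RMSE} \;=\; \Bigl[\tfrac{1}{T}\textstyle\sum_{t}\bigl(\pi_{t+h} - \hat{\pi}_{t+h\mid t}\bigr)^{2}\Bigr]^{1/2}.
```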

In Section 3 we apply the Stock–Watson methodology to conduct a positive analysis of Federal Reserve behavior. Specifically, we estimate monetary policy reaction functions, or PRFs, which relate the Fed's instrument (in this article, the fed funds rate) to the state of the economy, as determined by the full information set. Our interest is in testing formally whether the Fed's reactions to the state of the economy can be accurately summarized by a forward-looking Taylor rule of the sort studied by Batini and Haldane (1999), Clarida et al. (1999), and Forni and Reichlin (1996), among others; or whether, as is sometimes alleged, the Fed responds to variables other than expected real activity and expected inflation. We show here that application of the Stock–Watson methodology to this problem provides both a natural specification test for the standard forward-looking PRF and a nonparametric method for studying sources of misspecification.
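A stylized version of the specification in question (our notation; the horizons and coefficients are placeholders, not the paper's estimates) augments a forward-looking rule with the estimated factors,

```latex
r_t \;=\; \alpha \;+\; \beta\,E_t\,\pi_{t+h} \;+\; \gamma\,E_t\,y_{t+q} \;+\; \delta'\hat{F}_t \;+\; \varepsilon_t ,
```

where r_t is the fed funds rate, E_t π_{t+h} and E_t y_{t+q} are forecasts of inflation and real activity, and F̂_t collects the estimated factors. The hypothesis that the Fed responds only to expected inflation and real activity then corresponds to the restriction δ = 0.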

Section 4 briefly considers whether the methods employed in this paper might eventually prove useful to the Fed in actual policy-making. In particular, one can imagine an “expert system” that receives data in real time and provides a consistent benchmark estimate of the implied policy setting. To assess this possibility, we conduct a counterfactual historical exercise, in which we ask how well monetary policy would have done if it had relied mechanically on Stock–Watson (SW) forecasts and some simple policy reaction functions. Perhaps not surprisingly, though our expert system performs creditably, it does not match the record of human policy-makers. Nevertheless, the exercise provides some interesting results, including the finding that the inclusion of estimated factors in dynamic models of monetary policy can mitigate the well-known “price puzzle”, the common finding that changes in monetary policy seem to have perverse effects on inflation. Section 5 concludes by discussing possible extensions of this research.
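To make the pipeline concrete, a minimal Python sketch of such an expert system is given below; the function name, coefficients, and the partial-adjustment rule are illustrative assumptions rather than the specifications estimated in the paper.

```python
# Illustrative sketch only: chain factor-based forecasts into a simple
# partial-adjustment reaction function to obtain a benchmark funds-rate setting.

def benchmark_rate(pi_forecast, y_forecast, r_prev,
                   r_star=2.0, pi_star=2.0,
                   beta=1.5, gamma=0.5, rho=0.8):
    """Hypothetical coefficients; not the reaction function estimated in the paper.

    pi_forecast : forecast of inflation (e.g. from a diffusion-index model)
    y_forecast  : forecast of real activity (an output-gap proxy)
    r_prev      : last period's funds rate, used for interest-rate smoothing
    """
    # Desired nominal rate from a forward-looking Taylor-type rule.
    r_target = r_star + pi_star + beta * (pi_forecast - pi_star) + gamma * y_forecast
    # Adjust only partially toward the target each period (smoothing).
    return rho * r_prev + (1.0 - rho) * r_target
```

Run period by period on real-time data, the first stage would update the estimated factors and forecasts, and the second stage would translate them into an implied benchmark setting for the funds rate.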

Section snippets

Forecasting in a data-rich environment: some further results

Stock and Watson (2002), Stock and Watson (1999), henceforth SW, have shown that dynamic factor methods applied to large data sets can lead to improved forecasts of key macroeconomic variables, at least in simulated forecasting exercises. In this section we investigate three issues relevant to the applications we have in mind. First, we seek to determine whether the SW results are sensitive to the use of “real-time”, rather than finally revised data. Second, we ask whether data sets containing

Estimating the Fed's policy reaction function in a data-rich environment

In this section we apply the Stock–Watson methodology to a positive analysis of Federal Reserve behavior. We model the Fed's behavior by a policy reaction function (PRF), under which a policy instrument is set in response to the state of the economy, as measured by the estimated factors.

The standard practice in much recent empirical work has been to use tightly specified PRFs, such as the so-called Taylor rule (Taylor, 1993). According to the basic Taylor rule, the Fed moves the fed funds rate (
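For reference, the rule proposed in Taylor (1993), with π_t the inflation rate over the previous four quarters and y_t the percent deviation of real output from potential, sets

```latex
r_t \;=\; \pi_t \;+\; 0.5\,(\pi_t - 2) \;+\; 0.5\,y_t \;+\; 2 ,
```

where r_t is the federal funds rate in percent and both the inflation target and the equilibrium real rate are taken to be 2 percent. The forward-looking PRFs considered in this section replace current inflation and output with forecasts and allow the response coefficients to be estimated.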

Toward a real-time expert system for monetary policymaking

Section 2 of this paper discussed the potential value of SW methods for forecasting, using large, real-time data sets. Section 3 estimated policy reaction functions, which take as inputs forecasts of target variables like inflation and real activity and produce implied policy settings as outputs. Putting these two elements together suggests the intriguing possibility of designing a real-time “expert system” for monetary policymaking. In principle this system could assimilate the information

Conclusion

Positive and normative analyses of Federal Reserve policy can be enhanced by the recognition that the Fed operates in a data-rich environment. In this preliminary study, we have shown that methods for data-dimension reduction, such as those of Stock and Watson, can allow us to incorporate large data sets into the study of monetary policy.

A variety of extensions of this framework are possible, of which we briefly mention only two. First, the estimation approach used here identifies the

References (23)

  • Bai, J., Ng, S., 2002. Determining the number of factors in approximate factor models. Econometrica.
  • Batini, N., Haldane, A.G., 1999. Forward-looking rules for monetary policy. In: Taylor, J.B. (Ed.), Monetary Policy...
  • Beckner, S.K., 1996. Back from the Brink: The Greenspan Years.
  • Boivin, J., 2001. The Fed's conduct of monetary policy: has it changed and does it matter? Columbia University,...
  • Christiano, L., Eichenbaum, M., Evans, C., 2000. Monetary policy shocks: what have we learned and to what end? In:...
  • Clarida, R., Galí, J., Gertler, M., 1999. The science of monetary policy: a new Keynesian perspective. Journal of Economic Literature.
  • Clarida, R., Galí, J., Gertler, M., 2000. Monetary policy rules and macroeconomic stability: evidence and some theory. Quarterly Journal of Economics.
  • Croushore, D., Stark, T., 2002. A real-time data set for macroeconomists: does the data vintage matter? Review of...
  • Forni, M., Reichlin, L., 1996. Dynamic common factors in large cross-sections. Empirical Economics.
  • Stock, J.H., Watson, M.W., 1999. Forecasting inflation. Journal of Monetary Economics.
  • Stock, J.H., Watson, M.W., 2002. Macroeconomic forecasting using diffusion indexes. Journal of Business & Economic Statistics 20 (2), 147–162.
  • Taylor, J.B., 1993. Discretion versus policy rules in practice. Carnegie-Rochester Conference Series on Public Policy.

Prepared for a conference on “Monetary Policy Under Incomplete Information”, Gerzensee, Switzerland, October 12–14, 2000. Min Wei and Peter Bondarenko provided able research assistance. We thank Mark Watson and our discussants, Ignazio Angeloni and Harald Uhlig, for helpful comments, and Dean Croushore for assistance with the Greenbook data. This research was funded in part by NSF grant SES-0001751.
