Economics Letters

Volume 55, Issue 2, 29 August 1997, Pages 241-246

Estimating the arbitrage pricing theory with observed macro factors

https://doi.org/10.1016/S0165-1765(97)00055-4

Abstract

Analytical results are presented that drastically simplify joint estimation of the APT by Non-linear SUR and maximum likelihood when the factors are measured. Four alternative analytical expressions for calculating standard errors are also presented. © 1997 Elsevier Science S.A.

Introduction

Estimation of the arbitrage pricing theory (APT) with observed macroeconomic factors has long been hampered by the large number of parameters to be estimated and the non-linearities inherent in the model. McElroy et al. (1985) (hereafter MBW) first detailed an approach to jointly estimate all the parameters of the APT using Gallant's (1987) method of Iterated Non-linear Seemingly Unrelated Regressions (ITNLSUR), but many authors still use the two-step regression method of Fama and MacBeth (1973), because the approach as presented by MBW is difficult to implement, requiring numerical optimization over a very large number of parameters.

This paper presents analytical results that greatly simplify joint estimation of the APT by either the method of ITNLSUR or quasi full information maximum likelihood (QFIML).1 The intuition behind the procedure is as follows: if the “prices of risk” common to all securities are known a priori, then the APT is a system of Seemingly Unrelated Regressions (SUR), with closed-form solutions for estimates of the “factor loadings” and the covariances. Using these closed-form solutions, we derive the objective functions under QFIML and ITNLSUR as functions of the prices of risk only. For a model with 50 assets and 5 factors, this reduces the objective function under ITNLSUR from a dimension of 255 to a dimension of 5. We also derive analytical expressions for the standard errors under ITNLSUR and QFIML.
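The dimension reduction can be illustrated numerically. The sketch below (simulated data with hypothetical sizes; `concentrated_ssr` is an illustrative name, not from the paper) fixes the prices of risk λ, solves for the loadings B in closed form by OLS, and thereby leaves a search over only K = 5 parameters instead of K + NK = 255:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, K = 200, 50, 5                       # sample size, assets, factors

# Simulated data from y_t = B(lam + f_t) + eps_t (all values hypothetical).
lam_true = rng.normal(scale=0.1, size=K)   # prices of risk
B_true = rng.normal(size=(N, K))           # factor loadings
F = rng.normal(size=(T, K))                # factor realizations f_t
Y = (lam_true + F) @ B_true.T + rng.normal(scale=0.5, size=(T, N))

def concentrated_ssr(lam):
    """Given the K prices of risk, the loadings B have a closed-form (OLS)
    solution, so the remaining objective depends on lam alone."""
    X = lam + F                            # T x K regressors x_t(lam) = lam + f_t
    B_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T   # N x K closed-form loadings
    resid = Y - X @ B_hat.T
    return float(np.sum(resid**2))

# The numerical search is now over K = 5 parameters rather than K + N*K = 255.
```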


Model

We set up the APT following McElroy and Burmeister (1988). Let yt be an N×1 vector of excess returns; let ft be a K×1 vector of factor realizations; let B be an N×K matrix of factor sensitivities; let λ be a K×1 vector of risk prices common to all securities; and let ϵt be an N×1 vector of idiosyncratic disturbances, so that yt = Bxt(λ) + ϵt with xt(λ) ≡ λ + ft. It is assumed that
$$E_t[f_t]=0,\qquad E_t[f_tf_t']=\Gamma,\qquad E_t[\epsilon_t\mid f_t]=0,\qquad E_t[\epsilon_t\epsilon_t']=H,$$
where Et is the expectation operator conditional on the information set at the beginning of time t, which includes returns dated t−1 and earlier. In general, H is diagonal,
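A quick simulation check of these moment assumptions (all numerical values hypothetical): under the model the factors have mean zero and second-moment matrix Γ, the idiosyncratic errors have diagonal covariance H, and unconditional expected excess returns equal Bλ:

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, K = 100_000, 4, 2                      # hypothetical illustrative sizes

lam = np.array([0.5, -0.3])                  # K x 1 prices of risk
B = rng.normal(size=(N, K))                  # N x K factor loadings
Gamma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])               # E[f_t f_t'] = Gamma
h = np.array([0.5, 1.0, 1.5, 2.0])           # diagonal of H

F = rng.multivariate_normal(np.zeros(K), Gamma, size=T)   # E[f_t] = 0
Eps = rng.normal(size=(T, N)) * np.sqrt(h)   # E[eps_t eps_t'] = H, diagonal
Y = (lam + F) @ B.T + Eps                    # y_t = B x_t(lam) + eps_t

# Under these assumptions E[y_t] = B @ lam: expected excess returns are
# spanned by the loadings times the common prices of risk.
```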

Full information maximum likelihood estimation

If yt is distributed multivariate normal with mean Bxt(λ) and variance H, then the sample log-likelihood is the sum of the logs of the conditional densities of y1, y2,…, yT and is given by
$$L(\lambda,B,H)=\sum_{t=1}^{T}l_t=-\tfrac{1}{2}NT\ln(2\pi)+\tfrac{T}{2}\ln|H^{-1}|-\tfrac{1}{2}\sum_{t=1}^{T}[y_t-Bx_t(\lambda)]'H^{-1}[y_t-Bx_t(\lambda)].\tag{2}$$
The full information maximum likelihood estimates (MLEs) are those values of λ, B and H that maximize the sample log-likelihood (2). In general, analytic optimization is not possible because the normal equations are non-linear, and so
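The sample log-likelihood can be evaluated directly. A minimal sketch (function and argument names are illustrative, not from the paper):

```python
import numpy as np

def apt_loglik(lam, B, H, Y, F):
    """Sample log-likelihood (2) for y_t ~ N(B x_t(lam), H), x_t(lam) = lam + f_t.
    Y is T x N (returns), F is T x K (factors)."""
    T, N = Y.shape
    resid = Y - (lam + F) @ B.T                    # rows are y_t - B x_t(lam)
    Hinv = np.linalg.inv(H)
    _, logdet_Hinv = np.linalg.slogdet(Hinv)
    quad = np.einsum('ti,ij,tj->', resid, Hinv, resid)
    return -0.5 * N * T * np.log(2 * np.pi) + 0.5 * T * logdet_Hinv - 0.5 * quad
```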

Non-linear seemingly unrelated regressions

MBW detail for the APT the ITNLSUR estimator of Gallant (1987), which, unlike maximum likelihood, does not require a specific model of the error distribution. Using the analytical results above, estimation by ITNLSUR can also be greatly simplified. The NLSUR estimator minimizes
$$N(\lambda,B\mid H_0)=\sum_{t=1}^{T}[y_t-Bx_t(\lambda)]'H_0^{-1}[y_t-Bx_t(\lambda)],$$
where H0 is the estimated covariance matrix obtained from the residuals of an OLS regression of the returns yt on the factors ft and a constant. The first order conditions of
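The two steps — forming H0 from OLS residuals, then minimizing the NLSUR criterion over λ alone with B concentrated out — can be sketched as follows (simulated data, hypothetical sizes). Note that with regressors x_t(λ) common to all equations and H0 held fixed, the minimizing B given λ is simply equation-by-equation OLS:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
T, N, K = 300, 10, 2                         # hypothetical sizes
lam_true = np.array([0.4, -0.2])
B_true = rng.normal(size=(N, K))
F = rng.normal(size=(T, K))
Y = (lam_true + F) @ B_true.T + rng.normal(scale=0.3, size=(T, N))

# Step 1: H0 from the residuals of an OLS regression of the returns on the
# factors and a constant, as in the text (H taken diagonal).
Z = np.column_stack([np.ones(T), F])
resid0 = Y - Z @ np.linalg.lstsq(Z, Y, rcond=None)[0]
H0_inv = np.diag(1.0 / resid0.var(axis=0, ddof=K + 1))

def nlsur_obj(lam):
    """NLSUR criterion with B concentrated out: given lam, the regressors
    x_t(lam) are common to all equations, so the minimizing B is OLS."""
    X = lam + F
    B_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
    R = Y - X @ B_hat.T
    return float(np.einsum('ti,ij,tj->', R, H0_inv, R))

# Step 2: numerically minimize over the K prices of risk only.
res = minimize(nlsur_obj, x0=np.zeros(K), method='Nelder-Mead')
```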

Inference under maximum likelihood

Under maximum likelihood, inference can be carried out by the usual procedures. That is, if the standard regularity conditions are satisfied, then $\Xi^*-\Xi_0\overset{a}{\sim}N(0,\,T^{-1}J^{-1})$, where Ξ≡[λ′, vec(B)′]′, Ξ* is the maximum likelihood estimate of the true parameter vector Ξ0, and J is Fisher's information matrix. Two asymptotically equivalent estimates of the information matrix are the mean outer product of the score of the sample log-likelihood,
$$\hat{J}_{OP}=T^{-1}\sum_{t=1}^{T}\left.\frac{\partial l_t}{\partial\Xi}\frac{\partial l_t}{\partial\Xi'}\right|_{\Xi=\Xi^*},$$
and the Hessian of the sample
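To see the two estimates at work without the full APT machinery, the toy example below (a univariate normal mean model with known variance, not the paper's likelihood) computes the outer-product estimate from the per-observation scores and compares it with the Hessian-based value, which here is analytic; both converge to the Fisher information:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 50_000
mu0, sigma = 1.0, 2.0                        # hypothetical true values; sigma known
y = rng.normal(mu0, sigma, size=T)

mu_hat = y.mean()                            # MLE of the mean

# Outer-product estimate: mean of squared per-observation scores at the MLE.
score_t = (y - mu_hat) / sigma**2            # d l_t / d mu
J_op = np.mean(score_t**2)

# Hessian-based value: -T^{-1} times the Hessian of the sample log-likelihood,
# which for this model is exactly 1/sigma^2 (the Fisher information).
J_hess = 1.0 / sigma**2

# A standard error for mu_hat from either estimate: (T * J)^{-1/2}.
se = 1.0 / np.sqrt(T * J_op)
```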

Inference under NLSUR

Under certain regularity conditions (see, e.g., Gallant (1987)), the ITNLSUR estimator is strongly consistent and asymptotically normal, with
$$\hat{J}_{NL}=T^{-1}\sum_{t=1}^{T}\left.\frac{\partial[Bx_t(\lambda)]'}{\partial\Xi}\,H^{-1}\,\frac{\partial[Bx_t(\lambda)]}{\partial\Xi'}\right|_{\Xi=\Xi^*}=T^{-1}\begin{bmatrix}T\,B'H^{-1}B & B'H^{-1}[\mathbf{1}'X(\lambda)\otimes I_N]\\ [X(\lambda)'\mathbf{1}\otimes I_N]H^{-1}B & [X(\lambda)'X(\lambda)]\otimes H^{-1}\end{bmatrix},$$
where X(λ) is the T×K matrix whose t-th row is xt(λ)′ and 1 is a T×1 vector of ones. Our experience has been that standard errors calculated by any of the four methods are not substantially different.
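The equivalence of the summed Jacobian form and the partitioned closed form can be verified numerically (hypothetical small dimensions; vec(B) stacks the columns of B, so the Jacobian of Bx_t with respect to vec(B)′ is x_t′⊗I_N):

```python
import numpy as np

rng = np.random.default_rng(5)
T, N, K = 8, 3, 2                            # hypothetical small dimensions
lam = rng.normal(size=K)
B = rng.normal(size=(N, K))
F = rng.normal(size=(T, K))
H = np.diag(rng.uniform(0.5, 1.5, size=N))   # diagonal H
Hinv = np.linalg.inv(H)

X = lam + F                                  # T x K, row t is x_t(lam)'
ones = np.ones((T, 1))

# Summed form: T^{-1} sum_t G_t' H^{-1} G_t, where G_t = [B, x_t' kron I_N]
# is the Jacobian of B x_t(lam) in (lam', vec(B)')'.
J_sum = np.zeros((K + N * K, K + N * K))
for t in range(T):
    G_t = np.hstack([B, np.kron(X[t][None, :], np.eye(N))])
    J_sum += G_t.T @ Hinv @ G_t
J_sum /= T

# Partitioned closed form, block by block.
J11 = T * B.T @ Hinv @ B
J12 = B.T @ Hinv @ np.kron(ones.T @ X, np.eye(N))
J22 = np.kron(X.T @ X, Hinv)
J_block = np.block([[J11, J12], [J12.T, J22]]) / T
```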

Acknowledgements

The author thanks T. Wake Epps for helpful comments and suggestions.

