Recursive computation of piecewise constant volatilities

https://doi.org/10.1016/j.csda.2010.06.027

Abstract

Returns of risky assets are often modelled as the product of a volatility function and standard Gaussian white noise. Long range data cannot be adequately approximated by simple parametric models, so the choice is between retaining simple models and segmenting the data, or using a non-parametric approach. There is not always a clear dividing line between the two: in particular, modelling the volatility as a piecewise constant function can be interpreted either as segmentation based on the simple model of constant volatility, or as an approximation of the observed volatility by a simple function. A precise concept of local approximation is introduced, and it is shown that the sparsity problem of minimizing the number of intervals of constancy under constraints can be solved using dynamic programming. The method is applied to the daily returns of the German DAX index. A short simulation study shows that the method can accurately estimate the number of breaks in simulated data without prior knowledge of this number.

Section snippets

The problem

Let $R(t)$ be the return of some risky asset in period $t$. For stocks with end-of-period price $P_t$, $R(t)=\ln(P_t/P_{t-1})$. In empirical finance, $R(t)$ is often decomposed as $R(t)=\sigma(t)Z(t)$, $t=1,\dots,n$, where $Z$ is standard Gaussian white noise. The method can be adapted to other distributional assumptions such as in Curto et al. (2009); this will be briefly discussed in Section 6. The major problem is how best to model $\sigma(t)$. In the enormous ARCH class of models, for example, $\sigma(t)$ depends on past values of the
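The decomposition above is simple to work with in code. The following Python helpers (an illustration only, not part of the paper, which used R) compute log returns from a price series and simulate the model $R(t)=\sigma(t)Z(t)$:

```python
import numpy as np

def log_returns(prices):
    """Log returns R(t) = ln(P_t / P_{t-1}) from an end-of-period price series."""
    p = np.asarray(prices, dtype=float)
    return np.log(p[1:] / p[:-1])

def simulate_returns(sigma, seed=0):
    """Simulate R(t) = sigma(t) * Z(t) with Z standard Gaussian white noise."""
    sigma = np.asarray(sigma, dtype=float)
    rng = np.random.default_rng(seed)
    return sigma * rng.standard_normal(sigma.shape)
```
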

Minimizing the number of intervals

To define the modified problem let $I_1,\dots,I_k \subseteq \{1,\dots,n\}$, $I_\nu \cap I_\mu = \emptyset$ ($\nu \neq \mu$) and $I_1 \cup \dots \cup I_k = \{1,\dots,n\}$, be the intervals where $\sigma(t)$ is constant, with value $\sigma_{I_\nu}$ ($\sigma_{I_\nu} > 0$). The inequalities in Eq. (3) imply
$$\frac{\sum_{t \in I} R(t)^2}{\chi^2_{|I|,(1+\alpha_n)/2}} \;\le\; \sigma_{I_\nu}^2 \;\le\; \frac{\sum_{t \in I} R(t)^2}{\chi^2_{|I|,(1-\alpha_n)/2}}, \qquad I \subseteq I_\nu,\ \nu = 1,\dots,k.$$
A volatility function which satisfies these constraints is called locally adequate. Local adequacy is a weaker condition than (3) and it turns out that the sparsity problem can be solved for piecewise constant locally adequate volatility functions. It
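Local adequacy can be checked directly from this display. A minimal sketch in Python (the function name and the use of SciPy's chi-squared quantiles are mine; the paper only specifies the inequalities):

```python
import numpy as np
from scipy.stats import chi2

def locally_adequate(returns, sigma2, alpha=0.99):
    """Check local adequacy: on every interval where sigma2 is constant,
    and every subinterval I of it, the constant value must lie between
    sum_{t in I} R(t)^2 / chi2_{|I|,(1+alpha)/2}  and
    sum_{t in I} R(t)^2 / chi2_{|I|,(1-alpha)/2}."""
    r2 = np.asarray(returns, dtype=float) ** 2
    s2 = np.asarray(sigma2, dtype=float)
    n = len(r2)
    cs = np.concatenate([[0.0], np.cumsum(r2)])  # prefix sums of R(t)^2
    # split [0, n) into the intervals where sigma2 is constant
    starts = [0] + [i for i in range(1, n) if s2[i] != s2[i - 1]]
    ends = starts[1:] + [n]
    for a, b in zip(starts, ends):
        val = s2[a]  # constant value on this interval
        for u in range(a, b):
            for v in range(u + 1, b + 1):
                S = cs[v] - cs[u]
                m = v - u
                lower = S / chi2.ppf((1 + alpha) / 2, m)
                upper = S / chi2.ppf((1 - alpha) / 2, m)
                if not (lower <= val <= upper):
                    return False
    return True
```

The brute-force loop over all subintervals is quadratic per interval of constancy; it is meant to make the definition concrete, not to be efficient.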

Empirical volatility and dynamic programming

The use of (8) has an important drawback. As the quantiles $\chi^2_{|I_\nu|,(1-\alpha_n)/2}$ are small for short intervals $I_\nu = \{s_\nu,\dots,t_\nu\}$, the squared returns $R(s_\nu)^2,\dots,R(t_\nu)^2$ will in general be much closer to the lower bound $\sigma_l^2(I_\nu)$ than to the upper bound $\sigma_u^2(I_\nu)$. The choice (8) will therefore in general overestimate the empirical volatilities in the interval $\{s_\nu,\dots,t_\nu\}$. In the extreme case of $|I_\nu| = 1$ and with $\alpha_n = 0.99$ the choice (8) gives $\sigma_l^2(I_\nu) = 0.13\,R(s_\nu)^2$ and $\sigma_u^2(I_\nu) = 25445.29\,R(s_\nu)^2$. The choice (8) is not a sensible
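The two constants in the extreme case can be reproduced from the chi-squared quantiles with one degree of freedom. A sketch (the exact value of the upper factor depends on the rounding convention of the quantile routine, so it may differ slightly from the figure quoted above):

```python
from scipy.stats import chi2

alpha_n = 0.99
# For |I_nu| = 1 the bounds divide R(s_nu)^2 by chi-squared quantiles with 1 df
lower_factor = 1.0 / chi2.ppf((1 + alpha_n) / 2, df=1)  # approx. 0.13
upper_factor = 1.0 / chi2.ppf((1 - alpha_n) / 2, df=1)  # approx. 2.5e4
```
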

Minimizing the empirical quadratic deviations

In general there will not be a unique solution to the problem of minimizing the number of intervals for a piecewise constant empirical volatility. A way of obtaining a unique solution is to choose the minimal partition which minimizes the sum of the empirical quadratic deviations $\sum_{t=1}^{n}\bigl(R(t)^2-\sigma(t)^2\bigr)^2$. Although artificial examples can be constructed where even this added restriction does not result in a unique partition, this is very unlikely to happen for real data. The calculation of such a
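The lexicographic objective (first the number of intervals, then the sum of quadratic deviations) fits a standard dynamic programme over interval end points. The sketch below is a simplification of mine: the candidate value on each interval is taken to be the mean of the squared returns (the paper derives its empirical volatility differently), and local adequacy is checked by brute force over all subintervals:

```python
import numpy as np
from scipy.stats import chi2

def piecewise_volatility(returns, alpha=0.99):
    """Piecewise constant volatility with the minimal number of locally
    adequate intervals; ties broken by sum_t (R(t)^2 - sigma(t)^2)^2.
    Candidate value per interval: mean of squared returns (an assumption)."""
    r2 = np.asarray(returns, dtype=float) ** 2
    n = len(r2)
    cs = np.concatenate([[0.0], np.cumsum(r2)])
    # chi-squared quantiles for interval lengths 1..n
    lo_q = chi2.ppf((1 - alpha) / 2, np.arange(1, n + 1))
    hi_q = chi2.ppf((1 + alpha) / 2, np.arange(1, n + 1))

    def adequate(s, t):
        # 0-based closed interval [s, t]; return candidate sigma^2 or None
        sig2 = (cs[t + 1] - cs[s]) / (t - s + 1)
        for u in range(s, t + 1):
            for v in range(u, t + 1):
                S = cs[v + 1] - cs[u]
                m = v - u + 1
                if not (S / hi_q[m - 1] <= sig2 <= S / lo_q[m - 1]):
                    return None
        return sig2

    INF = float("inf")
    # best[t]: (number of intervals, sum of deviations) for returns[:t]
    best = [(0, 0.0)] + [(INF, INF)] * n
    start = [0] * (n + 1)    # start index of the last interval
    value = [0.0] * (n + 1)  # sigma^2 on the last interval
    for t in range(1, n + 1):
        for s in range(t):
            sig2 = adequate(s, t - 1)
            if sig2 is None:
                continue
            dev = float(np.sum((r2[s:t] - sig2) ** 2))
            cand = (best[s][0] + 1, best[s][1] + dev)
            if cand < best[t]:  # lexicographic comparison of tuples
                best[t] = cand
                start[t], value[t] = s, sig2
    # backtrack through the optimal partition
    sigma2 = np.empty(n)
    t = n
    while t > 0:
        sigma2[start[t]:t] = value[t]
        t = start[t]
    return sigma2
```

Singleton intervals are always locally adequate, so the recursion is always feasible; the quartic worst-case cost of the adequacy checks could be cut substantially with the recursive bookkeeping the paper describes.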

Simulations

As mentioned in the introduction, the methodology expounded above is intended to give a simple piecewise approximation to the volatility. It was not developed to detect breaks in the volatility, which is a separate problem. Nevertheless, it can be used to detect multiple breaks if, in applications to real data sets, it is kept in mind that not all breaks in the piecewise function correspond to breaks in the underlying volatility. There are many papers in the literature concerned with detecting

Extension to other distributions

So far it has been assumed that the noise $Z(t)$ in (1) is $N(0,1)$. This can be weakened: the $N(0,1)$ distribution can be replaced by any other standardized (zero mean and unit variance) continuous distribution $F$ which is strictly monotone increasing. This distribution must be fully specified, but this does not mean that one has to “know” the “true” distribution. An informed choice of $F$ can be based initially on the $N(0,1)$ distribution, altering it as required to obtain better results. As $F$
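For a general standardized $F$, the chi-squared quantiles in the local adequacy bounds are replaced by the corresponding quantiles of $\sum_{t \in I} Z(t)^2$ with the $Z(t)$ i.i.d. under $F$; these are typically not available in closed form but are easy to simulate. A Monte Carlo sketch (the standardized Student-t is only an illustrative choice of $F$, not the paper's prescription):

```python
import numpy as np

def sum_sq_quantiles(draw, m, alpha=0.99, n_sim=200_000, seed=0):
    """Monte Carlo approximation of the (1-alpha)/2 and (1+alpha)/2
    quantiles of sum_{i=1}^m Z_i^2, with Z_i i.i.d. from a standardized
    distribution F supplied as draw(rng, shape)."""
    rng = np.random.default_rng(seed)
    z = draw(rng, (n_sim, m))
    s = np.sum(z * z, axis=1)
    return np.quantile(s, (1 - alpha) / 2), np.quantile(s, (1 + alpha) / 2)

def standardized_t5(rng, shape):
    """Student-t with 5 df, scaled to unit variance (var = df/(df-2))."""
    df = 5.0
    return rng.standard_t(df, shape) / np.sqrt(df / (df - 2.0))
```

With a Gaussian `draw` the simulated quantiles recover the chi-squared quantiles, which gives a cheap sanity check of the simulation.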

Acknowledgements

The authors acknowledge the helpful comments and criticisms of earlier versions of the paper by two anonymous referees and the Editor. The research was supported by Deutsche Forschungsgemeinschaft (DFG). Views expressed by Christian Höhenrieder do not reflect official positions of Deutsche Bundesbank.

References (17)

  • M. Lavielle, Detection of multiple changes in a sequence of dependent random variables, Stochastic Processes and their Applications (1999)
  • E. Vassiliou et al., An adaptive algorithm for least squares piecewise monotonic data fitting, Computational Statistics and Data Analysis (2005)
  • Andersen, T.G., Benzoni, L., 2008. Realized volatility, Federal Reserve Bank of Chicago,...
  • J. Bai et al., Estimating and testing linear models with multiple structural changes, Econometrica (1998)
  • L. Boysen et al., Consistencies and rates of convergence of jump-penalized least squares estimators, The Annals of Statistics (2009)
  • J.D. Curto et al., Modelling stock markets’ volatility using GARCH models with normal, Student’s t and stable Paretian distributions, Statistical Papers (2009)
  • Davies, P.L., 2005. Universal Principles, Approximation and Model Choice, Invited talk, European Meeting of...
  • Davies, P.L., 2006. Long range financial data and model choice, Technical report 21/2006, SFB 475, Universität...

The algorithms suggested here were programmed in R and are available from the authors upon request.
