Value-at-Risk under Lévy GARCH models: Evidence from global stock markets

https://doi.org/10.1016/j.intfin.2016.08.008

Highlights

  • The performance of twenty-three VaR models is investigated.

  • We implement stringent backtesting for model validation during crisis and post-crisis periods.

  • We find strong support for the use of Lévy distributions during the crisis period.

  • Normal models are economically efficient during market calm.

Abstract

The aim of this paper is to reconsider the evidence on the forecasting ability of GARCH-type models in estimating the Value-at-Risk (VaR) of global stock market indices under improved return distributions. The performance of twenty-one VaR models, generated by combining three conditional volatility specifications (GARCH, GJR and FIGARCH) with seven distributional assumptions for return innovations, is investigated. We implement stringent backtesting during crisis and post-crisis periods for developed, emerging and frontier markets. Results show that the skewed-t along with heavy-tailed Lévy distributions considerably improve the forecasts of one-day-ahead VaR for long and short trading positions during the crisis period, regardless of the volatility model. However, we find no evidence that a given volatility specification outperforms the others across markets. The relevant models show evidence of long memory in developed markets and conditional asymmetry in frontier markets, whereas the standard GARCH is found to be the best suited specification for estimating VaR forecasts in emerging markets. The inclusion of the high-volatility period in the estimation sample highlights the predictability of VaR during the post-crisis period, where even the normal distribution rivals the more sophisticated ones in terms of statistical accuracy and regulatory capital allocation.

Introduction

The Value-at-Risk (VaR) concept has been established as an industry standard measure of market risk. It provides financial institutions with information on the expected worst loss over a target horizon at a given confidence level. Despite its importance and simplicity, there is no universally accepted method to compute the VaR of a particular portfolio, and different models may lead to significantly different risk measures (see Kuester et al., 2006, McMillan and Kambouroudis, 2009, among others). A main concern in the estimation of market risk with the VaR method is therefore the choice of an appropriate model: a misspecified model may turn out to be costly for the risk manager and lead to inaccurate risk estimation. Moreover, the extreme losses experienced by financial institutions during the recent global financial crisis, triggered by the U.S. subprime mortgage debacle of 2007–2008, have raised questions about the reliability of the implemented risk models. These questions have a direct bearing on the debate among the financial industry, regulators and academics over probabilistic market models for VaR forecasting that are capable of properly accounting for extreme events and increased volatility during financial market downturns.

In obtaining accurate VaR measures, the prediction of future market volatility is of paramount importance, particularly in view of its time-varying nature as well as some prominent stylized facts of stock returns (Cont, 2001). Indeed, there is ample empirical evidence that spells of small amplitude in price variations alternate with spells of large amplitude, a feature known as volatility clustering. Numerous econometric models have been suggested to capture the volatility clustering effect, the most widely used being the GARCH model (Bollerslev, 1986). The normal distribution, arising from the Brownian motion assumption as a benchmark process for describing return innovations, has dominated earlier GARCH-based VaR models, drawing criticisms that such a distributional assumption may not sufficiently capture the frequency of extreme shocks to asset prices, as well as the amplitude of these shocks, and usually leads to risk underestimation. The introduction of more flexible distributions, allowing for skewness and heavy tails in return modeling, exonerates GARCH-type models from such criticisms (Bao et al., 2007, BenSaïda, 2015).
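The volatility clustering mechanism of the GARCH(1,1) model can be sketched as a simple recursion. The parameter values below are illustrative, not estimates from the paper:

```python
import numpy as np

def garch11_simulate(n, omega=0.05, alpha=0.08, beta=0.90, seed=0):
    """Simulate returns from a GARCH(1,1) with standard normal innovations:
    r_t = sigma_t * z_t,  sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2.
    Parameter values are illustrative, not fitted values from the paper."""
    rng = np.random.default_rng(seed)
    r = np.empty(n)
    h = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    for t in range(n):
        z = rng.standard_normal()
        r[t] = np.sqrt(h) * z          # large |r_t| raises next period's variance,
        h = omega + alpha * r[t] ** 2 + beta * h  # producing clusters of volatility
    return r

returns = garch11_simulate(2000)
```

Because the conditional variance feeds back on squared past returns, large shocks are followed by further large (positive or negative) returns, reproducing the clustering feature described above.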

Recently, enhanced conditional volatility models with Lévy distributions have emerged with a view towards improved tail modeling. Examples include the normal inverse Gaussian distribution (Forsberg and Bollerslev, 2002, Venter and de Jongh, 2002, Broda and Paolella, 2009, Wilhelmsson, 2009), the multivariate generalized hyperbolic distribution, introduced by McNeil et al. (2005) and used in a GARCH context by Paolella and Polak (2015a, 2015b), the Meixner distribution (Grigoletto and Provasi, 2008), and the α-stable and tempered stable distributions (Mittnik and Paolella, 2003, Broda et al., 2013, Kim et al., 2008, Kim et al., 2011). The current study pursues the use of Lévy distributions as statistical tools for advanced risk modeling.

More concretely, our objective is to examine the suitability of univariate GARCH-type models in modeling conditional volatility and VaR under different assumptions of error distribution for global stock market indices.1 Within the class of conditional volatility models, we employ the standard GARCH model, GJR (Glosten et al., 1993) and FIGARCH (Baillie et al., 1996). Four infinitely divisible distributions arising from popular Lévy processes are considered, namely, the Variance Gamma (Madan et al., 1998), the CGMY (Carr et al., 2002), the normal inverse Gaussian (Barndorff-Nielsen, 1997), and the Meixner distribution (Schoutens, 2001).

The forecasting performance of these models is discussed and compared to the benchmark normal GARCH model and to fat-tailed time series models using the Student’s t and the skewed-t distribution of Hansen (1994). We also consider two highly competitive models, namely, the mixed normal GARCH and the fast Asymmetric Power ARCH model driven by noncentral t innovations, recently introduced by Krause and Paolella (2014). We empirically test the accuracy of VaR estimates during the high-volatility period for the MSCI World, MSCI Emerging Markets and MSCI Frontier Markets indices by means of an extensive backtesting exercise that includes the frequency-based test (Kupiec, 1995), independence tests (Christoffersen, 1998, Engle and Manganelli, 2004), duration-based tests (Candelon et al., 2011) and a test that jointly accounts for the frequency and the magnitude of VaR exceedances, recently introduced by Colletaz et al. (2013). Such a backtesting strategy encompasses almost all risk model validation techniques in the literature, and provides a meaningful framework to evaluate the accuracy of VaR models. Furthermore, we evaluate the economic importance of our results by computing daily capital requirements under the Basel II Accord (Basel Committee on Banking Supervision, 2006).
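As a rough illustration of the frequency-based test mentioned above, a minimal sketch of the Kupiec (1995) proportion-of-failures likelihood-ratio test might look as follows; the hit sequence and coverage level in the usage example are hypothetical:

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(violations, alpha):
    """Kupiec (1995) proportion-of-failures test.
    violations: boolean array, True on days when the realized loss exceeds VaR.
    Returns the LR statistic and its chi-square(1) p-value; H0 is that the
    violation rate equals the nominal coverage alpha."""
    T = len(violations)
    N = int(np.sum(violations))
    pi = N / T
    # log-likelihood under H0 (rate alpha) vs. under the observed rate pi
    ll0 = (T - N) * np.log(1.0 - alpha) + N * np.log(alpha)
    ll1 = 0.0
    if 0 < N < T:
        ll1 = (T - N) * np.log(1.0 - pi) + N * np.log(pi)
    lr = -2.0 * (ll0 - ll1)
    return lr, chi2.sf(lr, df=1)

# Hypothetical hit sequence: 10 violations in 1000 days at the 1% level
hits = np.zeros(1000, dtype=bool)
hits[:10] = True
lr_stat, p_value = kupiec_pof(hits, alpha=0.01)
```

With exactly 1% observed violations the statistic is zero and H0 is not rejected; a materially higher hit rate drives the statistic up and the p-value toward zero.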

This paper distinguishes itself from the relevant literature in several ways. (1) It allows direct comparisons to be made of the performance of a broad set of GARCH-type specifications combined with non-normal Lévy distributions for gauging and managing market risk. Although the option pricing literature establishes Lévy processes as quite suitable for capturing the behavior of return innovations, other evaluation metrics and discussions of their applicability as risk management tools are needed. (2) Our empirical assessment of the performance of competing VaR models not only tests whether VaR forecasts are misspecified overall, but also identifies how they are misspecified by examining which individual hypothesis is rejected. The hypotheses induced by the employed backtesting methods have different economic implications. Financial institutions focus on the frequency test since the empirical number of violations (i.e., occasions when realized losses exceed the VaR values) is used to determine their minimum capital requirements for market risk. Although market participants and regulators do care about the number of VaR violations, in practice they often have a more stringent requirement on the magnitude of losses beyond VaR, as this quantity is not given by any VaR model and represents the main conceptual deficiency of VaR. The duration independence test considers whether the VaR reflects the time-varying nature of risk. In particular, clustered violations may signal that the VaR model reacts too slowly to changes in market conditions, and hence may induce solvency issues.

Further, to the best of our knowledge, no published article examines all of these practically important issues in the backtesting of VaR estimates. (3) Until recently, a substantial empirical literature focused narrowly on the computation of VaR on the left tail of the distribution, which corresponds to long position risk. Short position risk, which is theoretically unlimited, has been neglected, and only a few papers have attempted to jointly account for both long and short position risk in equity markets (Giot and Laurent, 2003, So and Yu, 2006, Tang and Shieh, 2006, Diamandis et al., 2011). In our empirical analysis, the performance of VaR models is also examined in terms of the critical issue of model consistency between long and short position risk.

The main contributions of this study are the following. First, our results corroborate the calls for more realistic assumptions in financial modeling. In model estimation, it is shown that allowing for leptokurtic and skewed return distributions significantly improves the fit of conditional volatility models. Second, even though non-normal Lévy distributions do provide more accurate VaR estimates than those generated by both the normal and Student’s t distributions, the conditional volatility specification bears on the accuracy of VaR, as none of the three GARCH-type models outperforms the others across markets. The empirical evidence favors non-normal Lévy-based FIGARCH models in developed markets, whereas the GJR and the standard GARCH models are preferred for the frontier and emerging markets, respectively. Third, this paper offers important implications regarding the recent global financial crisis with respect to the estimation of VaR for developed, emerging and frontier markets. In general, the forecasting performance of the VaR models deteriorates during crisis periods, predominantly in the case of developed markets; for emerging and frontier markets, the forecasting performance is less affected. Fourth, the inclusion of the crisis period in the estimation sample significantly improves the backtesting performance of the competing VaR models during the post-crisis period, where even normal time series models rival the more flexible non-normal Lévy models. Our findings nevertheless provide compelling evidence of the inadequacy of the Student’s t distribution, as it tends to produce overly conservative risk measures during both high and low volatility periods.

The remainder of the paper is organized as follows. Section 2 outlines the basic concept of VaR and presents the employed time series models. Section 3 presents the alternative non-normal Lévy distributions. Section 4 describes the data and discusses the results of the empirical investigation. Section 5 summarizes the main findings of the paper.

Section snippets

VaR and volatility specifications

The VaR concept has emerged as the most prominent measure of downside market risk. It places an upper bound on losses at a given confidence level over a given forecast horizon. Thus, assuming that the VaR model is correct, realized losses will exceed the VaR threshold with only a small target probability α, typically chosen between 1% and 5%. More specifically, conditional on the information up to time t-h, the VaR (for a long position) at time t of one unit of investment is the α-quantile of the
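In a parametric GARCH setting, this quantile definition translates into a simple one-day-ahead VaR formula: the conditional volatility forecast is scaled by the α-quantile of the standardized innovation distribution. The sketch below assumes a zero conditional mean and the Student-t degrees of freedom shown, which are illustrative choices rather than the paper's fitted values:

```python
import numpy as np
from scipy.stats import norm, t

def one_day_var(sigma_next, alpha=0.01, mu=0.0, dist="normal", df=5):
    """One-day-ahead parametric VaR for a long position, reported as a
    positive loss: VaR = -(mu + sigma * q_alpha), where q_alpha is the
    alpha-quantile of the standardized innovation distribution.
    'dist' and 'df' are illustrative, not values estimated in the paper."""
    if dist == "normal":
        q = norm.ppf(alpha)
    else:
        # Student's t rescaled to unit variance so it is comparable
        # with the normal case at the same conditional volatility
        q = t.ppf(alpha, df) * np.sqrt((df - 2) / df)
    return -(mu + sigma_next * q)

var_normal = one_day_var(1.0, alpha=0.01)                 # normal innovations
var_student = one_day_var(1.0, alpha=0.01, dist="t", df=5)  # fatter tails
```

At the 1% level the heavy-tailed innovation distribution yields a larger VaR than the normal one for the same volatility forecast, which is precisely the mechanism by which non-normal distributions correct risk underestimation.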

Modeling return innovations with Lévy distributions

A Lévy process is a continuous-time stochastic process with stationary independent increments, analogous to i.i.d. innovations in a discrete-time setting. The simplest continuous-time stochastic process is the Brownian motion, which generates normally distributed innovations. The introduction of Lévy processes into financial modeling provides a flexible framework to replace the normal distribution by more sophisticated infinitely divisible ones. More specifically, while the Brownian motion
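As a small illustration of how such innovations might be generated in practice, the sketch below draws from a symmetric normal inverse Gaussian law using SciPy and standardizes the draws for use as unit-variance GARCH innovations. The shape parameters are illustrative, not values estimated in the paper:

```python
import numpy as np
from scipy.stats import norminvgauss, kurtosis

# Symmetric NIG: b = 0 gives no skew; smaller a gives heavier tails.
# These parameter values are illustrative only.
a, b = 1.0, 0.0
rng = np.random.default_rng(1)
z = norminvgauss.rvs(a, b, size=50_000, random_state=rng)

# Standardize to zero mean and unit variance, as required for
# innovations driving a GARCH filter
z = (z - z.mean()) / z.std(ddof=1)

# Fisher (excess) kurtosis: 0 for the normal; for this NIG the
# theoretical value is 3, so the sample estimate is clearly positive
excess_kurt = kurtosis(z)
```

The positive excess kurtosis confirms the semi-heavy tails that motivate replacing Gaussian innovations with infinitely divisible Lévy alternatives.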

Data and preliminary analysis

The data for this study consist of three global stock market indices: the MSCI Emerging Markets (EM), the MSCI Frontier Markets (FM) and the MSCI World, which is designed to measure the equity market performance of developed markets. All data are obtained from Thomson Reuters Eikon.9 For the MSCI World and MSCI EM indices, the sample comprises 20 years of daily data from January 3, 1995 to March 10, 2015, and covers the

Conclusion

Risk assessment is an important and complex task faced by market regulators and financial institutions, especially after the last subprime crisis. It is argued that, since market data are endogenous to market behavior, statistical analysis conducted in times of stability does not provide much guidance in times of crisis. Consequently, we conduct our empirical investigation on the accuracy of parametric VaR models during stressed financial markets with a stringent view of VaR models’ performance in terms

References (77)

  • D.N. Dimitrakopoulos et al.

    Value at risk models for volatile emerging markets equity portfolios

    Quart. Rev. Econ. Finance

    (2010)
  • Z. Ding et al.

    A long memory property of stock market returns and a new model

    J. Empirical Finance

    (1993)
  • P. Giot et al.

    Modelling daily value-at-risk using realized volatility and ARCH type models

    J. Empirical Finance

    (2004)
  • M. Haas et al.

    Time-varying mixture GARCH models and asymmetric volatility

    North Am. J. Econ. Finance

    (2013)
  • Y.S. Kim et al.

    Financial market models with Lévy processes and time-varying volatility

    J. Banking Finance

    (2008)
  • Y.S. Kim et al.

    Time series analysis for financial market meltdowns

    J. Banking Finance

    (2011)
  • D. McMillan et al.

    Are RiskMetrics forecasts good enough? Evidence from 31 stock markets

    Int. Rev. Financ. Anal.

    (2009)
  • R.C. Merton

    Option pricing when underlying stock returns are discontinuous

    J. Financ. Econ.

    (1976)
  • M.S. Paolella et al.

    COMFORT: a common market factor non-Gaussian returns model

    J. Econom.

    (2015)
  • M.K.P. So et al.

    Empirical analysis of GARCH models in value at risk estimation

    J. Int. Financ. Markets Inst. Money

    (2006)
  • T.L. Tang et al.

    Long memory in stock index futures markets: a value-at-risk approach

    Physica A

    (2006)
  • D. Ziggel et al.

    A new set of improved value-at-risk backtests

    J. Banking Finance

    (2014)
  • C. Alexander

    Market Risk Analysis

    (2008)
  • C. Alexander et al.

    Normal mixture GARCH(1,1): applications to exchange rate modelling

    J. Appl. Econom.

    (2006)
  • Y. Bao et al.

    Evaluating predictive performance of value-at-risk models in emerging markets: a reality check

    J. Forecasting

    (2006)
  • Y. Bao et al.

    Comparing density forecast models

    J. Forecasting

    (2007)
  • Barndorff-Nielsen, O.E., 1995. Normal Inverse Gaussian Distributions and Stochastic Volatility Modelling. Tech. Rep....
  • O.E. Barndorff-Nielsen

    Normal inverse Gaussian distributions and stochastic volatility modelling

    Scand. J. Stat.

    (1997)
  • Basel Committee on Banking Supervision, 2006. Basel II: International Convergence of Capital Measurement and Capital...
  • Basel Committee on Banking Supervision, 2009. Revisions to the Basel II Market Risk Framework: Final version, Tech....
  • M.L. Bianchi et al.

    Tempered stable distributions and processes in finance: numerical analysis

  • C. Bontemps et al.

    Testing distributional assumptions: a GMM approach

    J. Appl. Econom.

    (2012)
  • S.I. Boyarchenko et al.

    Option pricing for truncated Lévy processes

    Int. J. Theor. Appl. Finance

    (2000)
  • S.A. Broda et al.

    CHICAGO: a fast and accurate method for portfolio risk calculation

    J. Financ. Econom.

    (2009)
  • S. Campbell

    A review of backtesting and backtesting procedures

    J. Risk

    (2007)
  • B. Candelon et al.

    Backtesting value-at-risk: a GMM duration-based test

    J. Financ. Econom.

    (2011)
  • P. Carr et al.

    The fine structure of asset returns: an empirical investigation

    J. Bus.

    (2002)
  • P. Christoffersen

    Evaluating interval forecasts

    Int. Econ. Rev.

    (1998)