Article

Between ℙ and ℚ: The ℙ Measure for Pricing in Asset Liability Management

1 Ortec Finance, 3011 XB Rotterdam, The Netherlands
2 DIAM—Delft Institute of Applied Mathematics, Delft University of Technology, 2628 CD Delft, The Netherlands
3 CWI—The Center for Mathematics and Computer Science, 1098 XG Amsterdam, The Netherlands
* Author to whom correspondence should be addressed.
J. Risk Financial Manag. 2018, 11(4), 67; https://doi.org/10.3390/jrfm11040067
Submission received: 17 September 2018 / Revised: 18 October 2018 / Accepted: 21 October 2018 / Published: 24 October 2018
(This article belongs to the Special Issue Computational Finance)

Abstract
Insurance companies issue guarantees that need to be valued in line with market expectations. By calibrating option pricing models to the available implied volatility surfaces, one deals with the so-called risk-neutral measure Q, which can be used to generate market-consistent values for these guarantees. For asset liability management, insurers also need future values of these guarantees. Moreover, new regulations require insurance companies to value their positions on a one-year horizon. As the option prices at t = 1 are unknown, it is common practice to assume that the parameters of these option pricing models are constant, i.e., the parameters calibrated at time t = 0 are also used to value the guarantees at t = 1. However, it is well known that these parameters are not constant and may depend on the state of the market, which evolves under the real-world measure P. In this paper, we propose improved regression models that, given a set of market variables such as the VIX index and risk-free interest rates, estimate the calibrated parameters. When the market variables are included in a real-world simulation, one is able to assess the calibrated parameters (and consequently the implied volatility surface) in line with the simulated state of the market. By performing a regression, we are able to predict out-of-sample implied volatility surfaces accurately. Moreover, the impact on the Solvency Capital Requirement has been evaluated for different points in time. The impact depends on the initial state of the market and may vary between −46% and +52%.

Graphical Abstract

1. Introduction

Liabilities of insurance companies depend on the fair value of the outstanding claims that typically involve guarantees (also called embedded options). The market consistent value of these guarantees is defined under the risk-neutral measure Q, i.e., they are computed with pricing formulas that agree with the current implied volatility surfaces. To hedge against the risks involved in these claims, insurers often acquire (complex) option portfolios that themselves also require market consistent risk-neutral valuation. Furthermore, on 1 January 2016, the so-called Solvency II directive came into effect, which introduced the Solvency Capital Requirement (SCR). The SCR is defined as the minimum amount of capital which should be held by an insurer, such that the insurer is able to pay its claims over a one-year horizon with a 99.5% probability. The regulator demands that the insurer’s available capital be greater than, or equal to, the SCR. Because the claims typically depend on future market consistent valuation, computing the SCR and, more generally, performing proper Asset Liability Management (ALM) are challenging tasks.
To compute these future market consistent values of the embedded options, insurers require the probability distribution of the values of these embedded options. Typically, this is done by simulating a large number of random future states of the market, after which the positions in these states are valued under the market consistent risk-neutral measure. From the simulated embedded option values, the desired statistics can then be extracted. The future states of the market can be computed by means of risk-neutral models (Q in Q) or real-world models (Q in P). Risk-neutral simulations are, for example, used to calculate the Credit Value Adjustment (CVA) (see, e.g., Pykhtin (2012)), which is a traded quantity and should therefore be computed using no-arbitrage arguments. For quantities that are not traded (or hedged), the Q in Q approach appears to be incorrect (see, e.g., Stein (2016)) and the future state of the market should be modelled using the real-world measure (Q in P). Real-world models are calibrated to observed historical time series and are typically used to compute non-traded quantities such as Value-at-Risk (VaR).
The risk-neutral measure at t = 0 is connected to the observed implied volatility surface and is therefore well defined. However, the definition of the risk-neutral measure at the future time t = 1 is debatable. Despite some relevant research on predicting implied volatility surfaces (see, e.g., Cont et al. (2002), Mixon (2002) and Audrino and Colangelo (2010)), it is common practice to use option pricing models that are only calibrated at time t = 0, thereby assuming that the risk-neutral measure is independent of the state of the market (see, for example, Bauer et al. (2010) and Devineau and Loisel (2009)). This is, however, not in line with historical observations, where we see that the implied volatility surface does depend on the state of the market. Another drawback of this approach is that the resulting SCR is pro-cyclical, i.e., the SCR is relatively high when the market is in crisis and relatively low when the market is stable. The undesired effect of pro-cyclicality is that it can aggravate a downturn (Bikker and Hu 2015).
In this paper, we investigate the impact of relaxing the assumption that the risk-neutral measure is independent of the state of the market and develop the so-called VIX Heston model, which depends on the current as well as on simulated implied volatilities. This approach, which we here name the PQ approach, takes into account the Q measure information at time t = 0 and simulates risk-neutral model parameters (thus, future implied volatility surfaces are obtained by means of simulation) based on historically observed relations with relevant market variables such as the VIX index.
As is well-known, the VIX index is a volatility measure for the S&P-500 index, which is calculated by the Chicago Board Options Exchange (CBOE) (see CBOE (2015)), and it is therefore directly linked to the implied volatility surface. Consequently, extracting information from the VIX index is frequently studied (see, e.g., Duan and Yeh (2012)) and our approach in this paper is based on the methodology presented in Singor et al. (2017), where the development of the Heston model parameters for the S&P-500 index options and the VIX index have been analysed.
The contribution of our present research is two-fold. First, we discuss the justification of using a risk-neutral model with time-dependent parameters. By means of a hedge test, we show that hedging strategies that take the changes in the implied volatility surface into account significantly outperform those strategies that do not, both in simulation and in empirical tests. This leads to the conclusion that the time-dependent risk-neutral measure can be used for the evaluation of future embedded option prices. Second, we show the impact of our new approach. For that, we use real data from 2007 to 2016 and compute the SCR on a monthly basis with a constant Q measure and also with the VIX Heston model, where this assumption is relaxed. We conclude that the VIX Heston model predicts out-of-sample implied volatility surfaces accurately and computes more conservative and stable SCRs. The impact of using the new approach on the SCR depends on the initial state of the market and may vary from −46% to +52% in our experiments. Moreover, we see that the SCR computed with the VIX Heston model is significantly less pro-cyclical; for example, it is lower in the wake of the 2008 credit crisis, as it incorporates the likely normalisation.
The outline of this paper is as follows. In Section 2, we give the definition of the SCR. In Section 3, we explain the dynamic VIX Heston model. In Section 4, we present the hedge tests with the corresponding results, followed by Section 5, where we present the numerical VIX Heston results and the impact of using the PQ dynamics on the SCR. Section 6 concludes.

2. Solvency Capital Requirement

Let us denote the policy’s net income in the interval [0, t] by A_t, which is defined as the cash flows up to time t that are generated under the real-world market. The mathematical definition is given by
A_t = \int_0^t e^{\mu(t-s)}\,\mathrm{cashflow}(s, \Delta s)\,\mathrm{d}s,
where μ is the expected return and “cashflow(s, Δs)” denotes the generated cash flow over the interval [s, s + Δs]. Similarly, we define the policy’s liabilities L_t as minus the discounted expected cash flows under the risk-neutral measure over the interval [t, T]:
L_t = -\mathbb{E}^{\mathbb{Q}}\left[ \int_t^T e^{-r(s-t)}\,\mathrm{cashflow}(s, \Delta s)\,\mathrm{d}s \,\middle|\, \mathcal{F}_t \right],
with risk-free rate r. Note that a positive/negative cash flow corresponds to an income/liability for the insurer. We define
N_t := A_t - L_t,
which can be thought of as the policy’s net value at time t. The Solvency Capital Requirement is now defined as the 99.5% Value-at-Risk (i.e., the 99.5% quantile) of the one-year loss distribution under the real-world measure, i.e.,
\mathrm{SCR} = \mathrm{VaR}_{0.995}\left(N_0 - \tilde N_1\right) := \inf\left\{ x \in \mathbb{R} : \mathbb{P}\left(N_0 - \tilde N_1 < x\right) > 0.995 \right\}.
Here, \tilde N_1 is defined as the discounted value of N_1.
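Given simulated discounted net values, the SCR definition above reduces to an empirical 99.5% quantile of the loss distribution. A minimal Python sketch; the helper name and the normal toy distribution are illustrative, not from the paper:

```python
import numpy as np

def scr_from_simulation(n0, n1_discounted, alpha=0.995):
    """Empirical SCR: the alpha-quantile of the simulated one-year loss
    N_0 - N~_1 (hypothetical helper, illustrating the definition only)."""
    losses = n0 - np.asarray(n1_discounted)
    return np.quantile(losses, alpha)

# Toy example with a normal one-year discounted net-value distribution.
rng = np.random.default_rng(0)
n1_tilde = rng.normal(loc=100.0, scale=10.0, size=200_000)
scr = scr_from_simulation(100.0, n1_tilde)
```

For the normal toy distribution, the estimate should be close to 10 times the 99.5% standard-normal quantile (about 25.8).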

Guaranteed Minimum Accumulation Benefit

This section is dedicated to deriving the fund dynamics and the SCR of a frequently used guarantee in the insurance industry, namely the Guaranteed Minimum Accumulation Benefit (GMAB) variable annuity rider.
We assume that the fund only contains stocks, but the derivation is similar when different assets are combined. We denote the stock and fund value by S_t and F_t, respectively, and define the initial premium by G. We assume the payout at maturity T is at least equal to the initial premium, in other words,
\mathrm{Payout}_T = \max\left(F_T, G\right).
The dynamics of the fund are very similar to the stock dynamics, except for a fee α which is deducted from the fund as a payment to the insurer. This fee can be thought of as a dividend yield. Following Milevsky and Salisbury (2001), we obtain
\mathrm{d}F_t = \frac{F_t}{S_t}\,\mathrm{d}S_t - \alpha F_t\,\mathrm{d}t, \qquad F_0 = G.
The specific dynamics depend on the assumptions regarding the stock price S_t. Here, we assume the Black–Scholes model (geometric Brownian motion, GBM) under the observed real-world measure P, which yields:
\mathrm{d}F_t^{\mathbb{P}} = (\mu - \alpha) F_t^{\mathbb{P}}\,\mathrm{d}t + \sigma F_t^{\mathbb{P}}\,\mathrm{d}W_t^{\mathbb{P}}, \qquad F_0 = G.
Moreover, for the valuation, a risk-neutral Heston model is implemented, leading to
\mathrm{d}F_t = (r - \alpha) F_t\,\mathrm{d}t + \sqrt{v_t}\, F_t\,\mathrm{d}W_t^{F}, \qquad F_0 = G,
\mathrm{d}v_t = \kappa(\bar v - v_t)\,\mathrm{d}t + \gamma\sqrt{v_t}\,\mathrm{d}W_t^{v},
\langle \mathrm{d}W_t^{F}, \mathrm{d}W_t^{v} \rangle = \rho\,\mathrm{d}t.
The income is generated by the accumulated fees, hence
A_t = \int_0^t \alpha F_s^{\mathbb{P}}\, e^{\mu(t-s)}\,\mathrm{d}s.
The liabilities, on the other hand, depend on the final value of the fund, as follows,
  • If F_T ≥ G, the policyholder receives F_T and the insurer has no liabilities.
  • If F_T < G, the policyholder receives G and the liabilities of the insurer are equal to G − F_T.
Moreover, the insurer continues to claim future fees, hence, according to Equation (2), we can write the liabilities as
L_t = \mathbb{E}^{\mathbb{Q}}\left[ e^{-r(T-t)} \max\left(G - F_T, 0\right) - \int_t^T e^{-r(s-t)}\,\alpha F_s\,\mathrm{d}s \,\middle|\, \mathcal{F}_t \right] = \mathrm{Put}(F_t, G) + F_t\left(e^{-\alpha(T-t)} - 1\right),
where Put(F_t, G) denotes the value of a European put option on the fund at time t with strike price G and dividend yield α. We can substitute these definitions into Equation (4) to obtain
\mathrm{SCR} = \mathrm{VaR}_{0.995}\left(N_0 - e^{-r} N_1\right) = \mathrm{VaR}_{0.995}\left(A_0 - L_0 - e^{-r}(A_1 - L_1)\right) = e^{-r}\,\mathrm{VaR}_{0.995}\left(\mathrm{Put}(F_1, G) - g(F_1, 1)\right) - \mathrm{Put}(F_0, G) + g(F_0, 0),
with
g(F_t, t) = \int_0^t \alpha F_s\, e^{\mu(t-s)}\,\mathrm{d}s + F_t\left(1 - e^{-\alpha(T-t)}\right),
which can be thought of as the sum of the realized and expected fees. In this case, the SCR depends on the real-world distribution of F_1, which determines g(F_1, 1) and also influences the risk-neutral valuation of Put(F_1, G).
Variable annuity riders require a risk-neutral valuation at (future) time t = 1, as the liabilities are, by definition, conditional expectations under the risk-neutral measure. In the case of a GMAB rider, an analytic expression is available, but often there is no such expression for L_t. Hence, the evaluation of the conditional expectation typically requires an approximation. In that case, the Least-Squares Monte Carlo algorithm is often used to approximate the conditional expectations at t = 1. The experiments presented in this paper were also performed for the Guaranteed Minimum Withdrawal Benefit (GMWB) variable annuity rider. As the results and conclusions were very similar to those for the GMAB, we restrict ourselves in this paper to the GMAB variable annuity.
The distributions of g(F_1, 1) and Put(F_1, G) in Equation (12) can be obtained by means of a Monte Carlo simulation:
  • First, the net policy value N_0 is determined, according to definition Equation (3).
  • Thereafter, the fund value F_t, along with other explanatory variables, is simulated according to the real-world measure up to time t = 1.
  • Subsequently, the values of Put(F_1, G) and g(F_1, 1) are evaluated for each trajectory. The value of g(F_1, 1) can be obtained directly from the trajectory of F_t; however, Put(F_1, G) requires a risk-neutral valuation for which the risk-neutral measure at time t = 1 is required.
  • Finally, the simulated values are combined to construct the loss distribution. The SCR corresponds to the 99.5% quantile of this distribution.
Some more detail about the Monte Carlo simulation is provided in Appendix A.
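The Monte Carlo procedure above can be sketched as follows. This is a simplified illustration only: it prices the put with a Black–Scholes formula (with dividend yield α) instead of the Heston model used in the paper, approximates the fee integral with a single trapezoid step, and uses made-up parameter values.

```python
import numpy as np
from scipy.stats import norm

def bs_put(F, G, r, alpha, sigma, tau):
    """European put with dividend yield alpha: a Black-Scholes stand-in
    for the Heston put valuation used in the paper."""
    sq = sigma * np.sqrt(tau)
    d1 = (np.log(F / G) + (r - alpha + 0.5 * sigma**2) * tau) / sq
    d2 = d1 - sq
    return G * np.exp(-r * tau) * norm.cdf(-d2) - F * np.exp(-alpha * tau) * norm.cdf(-d1)

def gmab_scr(G=100.0, T=10.0, mu=0.06, r=0.01, alpha=0.02, sigma=0.2,
             n_paths=100_000, seed=0):
    rng = np.random.default_rng(seed)
    # Expected fees at t = 0: g(F_0, 0) = F_0 (1 - e^{-alpha T}).
    g0 = G * (1.0 - np.exp(-alpha * T))
    # Simulate F_1 under the real-world GBM measure (one-year step).
    z = rng.standard_normal(n_paths)
    f1 = G * np.exp(mu - alpha - 0.5 * sigma**2 + sigma * z)
    # Realized fees over [0, 1]: crude one-step trapezoid for the integral.
    fees = 0.5 * alpha * (G * np.exp(mu) + f1)
    g1 = fees + f1 * (1.0 - np.exp(-alpha * (T - 1.0)))
    # Risk-neutral put value at t = 1 for each trajectory.
    put1 = bs_put(f1, G, r, alpha, sigma, T - 1.0)
    # Loss distribution and its 99.5% quantile, per the SCR expression above.
    loss = np.exp(-r) * (put1 - g1)
    return np.quantile(loss, 0.995) - bs_put(G, G, r, alpha, sigma, T) + g0
```

Replacing `bs_put` by a Heston pricer with the (simulated) parameter set of each trajectory recovers the nested valuation described in the text.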

3. Dynamic Stochastic Volatility Model

When valuing options, one typically wishes to calibrate a risk-neutral model according to the market’s expectations, which are quantified by the implied volatility surface. This implied volatility surface can be used to extract European option prices for a wide range of maturities and strikes. The market expectation (and so the volatility surface) is, however, unknown at t = 1, and therefore practitioners typically use the implied volatility surface at t = 0 to calibrate the risk-neutral model parameters. In this standard Q in P approach, these parameters are assumed to be constant over time, i.e., the risk-neutral measure is independent of the real-world measure. Note that this approach is also used to compute risk measures from the Basel accords, such as the credit value adjustment (CVA), capital valuation adjustment (KVA) and potential future exposure (PFE) (see Kenyon et al. (2015); Jain et al. (2016); Ruiz (2014)). However, regarding the computation of CVA, this quantity is typically hedged, and so using only the market expectation at t = 0 is sufficient.
In the PQ approach, we relax this independence assumption: the calibrated risk-neutral model parameters are related to the simulated real-world scenarios.

3.1. Heston Model

We assume the Heston model as a benchmark. The Heston stochastic volatility model (Heston 1993) assumes the volatility of the stock price process to be driven by a CIR process, i.e., under the Q measure,
\mathrm{d}S_t = r S_t\,\mathrm{d}t + \sqrt{v_t}\, S_t\,\mathrm{d}W_t^{1}, \qquad S(0) = S_0,
\mathrm{d}v_t = \kappa(\bar v - v_t)\,\mathrm{d}t + \gamma\sqrt{v_t}\,\mathrm{d}W_t^{2}, \qquad v(0) = v_0,
\langle \mathrm{d}W_t^{1}, \mathrm{d}W_t^{2} \rangle = \rho\,\mathrm{d}t,
where r denotes the risk-free rate, κ the speed of mean reversion, v̄ the long-term variance, γ the volatility of variance and ρ the correlation between the asset price and its variance. The risk-free rate is assumed to be constant throughout this research. To calibrate these parameters according to the market’s expectations, one wishes to minimize the distance between the model’s and the market’s implied volatilities. For the Heston model, consider the following search space for the parameters
\Omega_{\mathrm{Search}} = D_\kappa \times D_{v_0} \times D_{\bar v} \times D_\gamma \times D_\rho = [0, 10] \times [0, 1] \times [0, 1] \times [0, 2] \times [-1, 1],
where D_p denotes the search domain for parameter p. Using this search space, one is able to find the calibrated parameters at time t by minimizing the sum of squared errors:
\Omega_t^{\mathrm{Heston}} = \arg\min_{\Omega \in \Omega_{\mathrm{Search}}} \sum_K \sum_T \left( \sigma^{\mathrm{Market}}(t, K, T) - \sigma^{\mathrm{Heston}}(t, \Omega, K, T) \right)^2.
The set of parameters \Omega_t^{\mathrm{Heston}} = \{\kappa, v_0, \bar v, \gamma, \rho\} minimizing this expression is considered risk-neutral and reflects the market’s expectations.
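The calibration in Equation (15) is a bounded least-squares problem. The sketch below illustrates its structure with SciPy’s `least_squares` and the Ω_Search bounds; `model_vol` is a toy stand-in for a real Heston implied-volatility engine (which would price via the characteristic function and invert Black–Scholes), so its functional form is hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def model_vol(params, k, tau):
    """Toy stand-in for a Heston implied-vol engine: blends v0 and vbar
    along the term structure and adds a gamma/rho-driven skew in
    log-moneyness k. Hypothetical form, for illustrating the loop only."""
    kappa, v0, vbar, gamma, rho = params
    w = (1.0 - np.exp(-kappa * tau)) / np.maximum(kappa * tau, 1e-8)
    var = v0 * w + vbar * (1.0 - w)        # term-structure blend of variances
    return np.sqrt(var) + gamma * (rho + 0.5 * k) * k

def calibrate(market_vols, k, tau):
    """Bounded least-squares fit over the Omega_Search domain."""
    lb = [0.0, 0.0, 0.0, 0.0, -1.0]   # kappa, v0, vbar, gamma, rho lower bounds
    ub = [10.0, 1.0, 1.0, 2.0, 1.0]   # corresponding upper bounds
    residuals = lambda p: (model_vol(p, k, tau) - market_vols).ravel()
    x0 = [1.0, 0.04, 0.04, 0.5, -0.5]  # illustrative starting point
    return least_squares(residuals, x0, bounds=(lb, ub)).x
```

Swapping `model_vol` for a genuine Heston pricer yields the calibration of Equation (15) unchanged.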
When the market is subject to changes, its expectations will change accordingly. Hence, the implied volatility surface will evolve dynamically over time. Consequently, the Heston parameters may change over time. Figure 1 shows the monthly evolution of the Heston parameters from January 2006 to February 2017.
During this calibration procedure, we assumed κ to be constant; an unrestricted κ led to unstable results and did not significantly improve the accuracy. The other parameters, however, do not appear constant over time. This may give rise to issues in risk-management applications, where one simulates many real-world paths to assess the market sensitivity of a portfolio, balance sheet, etc. In many cases, a risk-neutral valuation is required, which is nested inside the real-world simulation, for example, when the portfolio or balance sheet contains options. Consequently, the future implied volatility surface for each trajectory needs to be known, such that the Heston parameters can be calibrated accordingly. Modelling the implied volatility surface over time is a challenging task (see, e.g., Cont et al. (2002); Mixon (2002); Audrino and Colangelo (2010)), as it quantifies the market’s expectations, which depend on many factors. Moreover, even if the implied volatility surface were modelled, one would still need to perform a costly calibration procedure. Performing this calibration for each of the simulated trajectories would require a significant computational effort. As the Heston model is a parameterization of the implied volatility surface, an attractive alternative is to simulate the Heston parameters directly.
Figure 1 clearly shows that the parameters are time-dependent, which is in contrast with the assumptions of the plain Heston model. In this research, we first search for relations between the risk-neutral parameters and observed real-world market indices, such as the VIX index. When a relation is found, the future risk-neutral parameters can be extracted from a real-world simulation. In this way, by performing a real-world simulation, we can directly forecast the set of Heston parameters within each simulated trajectory. The risk-neutral measure is then conditioned on the simulated state of the market, without the need for a simulated implied volatility surface.

3.2. VIX Heston Model

We have already described the difficulties of modelling the implied volatility surface or, equivalently, the option prices. A different approach is therefore required to calibrate the Heston parameters in simulated markets. The simulated parameter sets should accurately reflect the expectations of the simulated market, for example, by linking the dynamics of the Heston parameters to the dynamics of the market. In Singor et al. (2017), an approach is considered which is based on the assumption of a linear relationship between the VIX index and the Heston parameters. After analysis, it was concluded that:
  • The initial volatility v_{0,t} and the volatility of volatility γ_t are highly correlated to the VIX index, with correlation coefficients of 0.99 and 0.76, respectively.
  • The long-term volatility v̄_t appears to be correlated to the VIX index trend line (estimated by a Kalman filter), with a correlation coefficient of 0.74.
To this end, the following restrictions are imposed on the Heston model parameters:
\Omega_t^{\mathrm{Heston}}(X) = \begin{cases} \kappa_t = \kappa, & \kappa \in \mathbb{R}^+, \\ v_{0,t} = \left( a_{v_0} \cdot \mathrm{VIX}_t + b_{v_0} \right)^2, & a_{v_0}, b_{v_0} \in \mathbb{R}, \\ \bar v_t = \left( a_{\bar v} \cdot \mathrm{VIXfilter}_t + b_{\bar v} \right)^2, & a_{\bar v}, b_{\bar v} \in \mathbb{R}, \\ \gamma_t = a_\gamma \cdot \mathrm{VIX}_t + b_\gamma, & a_\gamma, b_\gamma \in \mathbb{R}, \\ \rho_t = \rho, & \rho \in [-1, 1], \end{cases}
where both the speed of mean reversion κ_t and the correlation coefficient ρ_t are assumed to be constant over time. The constant ρ assumption is justified by the fact that ρ displays a mean-reverting pattern and can therefore be approximated by its long-term mean. The constant κ assumption is justified by observations in Gauthier and Rivaille (2009). They argued that the effect on the implied volatility surface of increasing κ is similar to that of decreasing γ. Thus, allowing κ to change over time unnecessarily overcomplicates the model. Moreover, numerical experiments show that an unrestricted κ sometimes leads to unstable results.
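Given calibrated coefficients, the parameter mapping above is cheap to evaluate inside a real-world scenario set. A minimal sketch; the coefficient values below are purely illustrative, not the calibrated values from the paper:

```python
def vix_heston_params(vix_t, vix_filter_t, x):
    """Map a simulated VIX level (and its Kalman-filtered trend) to a
    Heston parameter set, following the restrictions above."""
    v0 = (x["a_v0"] * vix_t + x["b_v0"]) ** 2
    vbar = (x["a_vbar"] * vix_filter_t + x["b_vbar"]) ** 2
    gamma = x["a_gamma"] * vix_t + x["b_gamma"]
    return {"kappa": x["kappa"], "v0": v0, "vbar": vbar,
            "gamma": gamma, "rho": x["rho"]}

# Illustrative coefficients (hypothetical; calibrate via Equation (17)).
x = {"kappa": 1.0, "a_v0": 0.9, "b_v0": 0.01, "a_vbar": 0.8, "b_vbar": 0.02,
     "a_gamma": 1.5, "b_gamma": 0.1, "rho": -0.7}
params = vix_heston_params(0.20, 0.18, x)   # VIX quoted as a decimal here
```

Evaluating this map per trajectory replaces the costly per-scenario recalibration discussed in Section 3.1.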
The purpose of the restrictions is to accurately reflect the market’s expectations. To this end, we wish to minimize the distance between the observed and predicted implied volatility surfaces. Therefore, we calibrate the constrained parameters with a procedure similar to Equation (15). By changing the parameter set from \Omega_t^{\mathrm{Heston}} to X = \{\kappa, a_{v_0}, b_{v_0}, a_{\bar v}, b_{\bar v}, a_\gamma, b_\gamma, \rho\} and summing over all points in time, one obtains
X = \arg\min_{X_S \in X_{\mathrm{Search}}} \sum_{t, K, T} \left( \sigma^{\mathrm{Market}}(t, K, T) - \sigma^{\mathrm{Heston}}(t, \Omega_t^{\mathrm{Heston}}(X_S), K, T) \right)^2,
with
X_{\mathrm{Search}} = D_\kappa \times D_{a_{v_0}} \times D_{b_{v_0}} \times D_{a_{\bar v}} \times D_{b_{\bar v}} \times D_{a_\gamma} \times D_{b_\gamma} \times D_\rho = \{1\} \times \mathbb{R}^6 \times [-1, 1].
By including the VIX-index in the real-world simulation, one is able to efficiently evaluate the set of Heston parameters in line with the simulated state of the market. For more information regarding the derivation and properties of the VIX Heston model, we refer the reader to Singor et al. (2017).
It is, however, important to stress the different assumptions in the real-world and risk-neutral markets. Risk-neutral valuations are performed under the Heston model, which assumes constant parameters. However, in real-world simulations, we assume the Heston parameters to change over time, according to the simulated state of the market, similar to Figure 1. One could argue that this approach is invalid, since we are violating the assumptions of the risk-neutral market. To this end, we discuss a justification of this approach, by means of a hedge test. Moreover, to assess the impact of time-dependent Heston parameters, we implement the VIX Heston model as proposed in Singor et al. (2017) in a risk-management application: the Solvency Capital Requirement.

4. Hedge Test

Before implementing the dynamic risk-neutral measure (the VIX Heston model) in a risk-management application, we first test its applicability from a theoretical point of view. For example, in theory, one should be able to hedge against future positions using today’s implied volatility surface. This no longer applies when one assumes a risk-neutral measure that changes over time. To this end, we perform an experimental hedge test that determines which approach is more accurate in terms of future option prices, a dynamic or constant risk-neutral measure.
The plain Heston model assumes v̄ and γ to be constant; hence, from a theoretical point of view, it would be redundant to hedge against changes of these parameters. However, due to the dynamic behaviour of the implied volatility surface, v̄ and γ will change over time (see Figure 1). Thus, from an empirical point of view, the option price dynamics are subject to changes of these parameters. To support this claim, we compare three different hedging strategies. The first strategy is the classical Delta-Vega hedge, which does not take any changes of v̄ and γ into account. The replicating portfolio aims at hedging an option “A”, with value C_t, by holding a certain amount of stocks and a different option with value C̃_t (called option “B” from this point onwards), i.e.,
\Pi_t = -C_t + \Delta^{(1)}(t)\, S_t + \Delta^{(2)}(t)\, \tilde C_t + B_t, \qquad \Pi_0 = 0,
where B_t denotes the risk-free asset (for example, a bank account or a government bond), which grows with the constant risk-free rate r. Note that option B depends on the same underlying market factors as option A. Following Bakshi et al. (1997), we impose the so-called minimized variance constraints,
\langle \mathrm{d}\Pi_t, \mathrm{d}S_t \rangle = 0, \qquad \langle \mathrm{d}\Pi_t, \mathrm{d}W_t^{v} \rangle = 0,
where ⟨·,·⟩ refers to the covariation between the two processes and W_t^v is the Brownian motion, independent of S_t, driving the random changes in the volatility. By imposing these constraints, one obtains a portfolio that has no covariation with the underlying asset and its volatility. In other words, changes in the asset’s value and changes in the asset’s volatility will have neither a direct nor an indirect (through correlations) effect on the portfolio. These constraints give us the following hedge ratios,
\Delta^{(1)}(t) = \frac{\partial C}{\partial S_t} - \Delta^{(2)}(t) \frac{\partial \tilde C}{\partial S_t}, \qquad \Delta^{(2)}(t) = \frac{\partial C / \partial v_t}{\partial \tilde C / \partial v_t}.
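In practice, the sensitivities in these hedge ratios can be approximated with finite differences when no analytic Greeks are at hand. A small sketch, assuming generic pricing functions C(S, v) for options A and B; the helper name and step sizes are illustrative:

```python
def delta_vega_ratios(price_a, price_b, s, v, ds=1e-4, dv=1e-6):
    """Delta-Vega hedge ratios from the expressions above, with the
    partial derivatives approximated by central finite differences."""
    dC_dS = (price_a(s + ds, v) - price_a(s - ds, v)) / (2.0 * ds)
    dC_dv = (price_a(s, v + dv) - price_a(s, v - dv)) / (2.0 * dv)
    dCb_dS = (price_b(s + ds, v) - price_b(s - ds, v)) / (2.0 * ds)
    dCb_dv = (price_b(s, v + dv) - price_b(s, v - dv)) / (2.0 * dv)
    delta2 = dC_dv / dCb_dv            # Delta^(2): ratio of vegas
    delta1 = dC_dS - delta2 * dCb_dS   # Delta^(1): delta net of option B's delta
    return delta1, delta2
```

Any pricer with the signature `C(S, v)` can be plugged in, e.g. a Heston pricer with the remaining parameters held fixed.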
Under the assumptions of the Heston model, the portfolio dynamics are given by,
\mathrm{d}\Pi_t = \left[ -\frac{\partial C}{\partial t} - \frac{1}{2} v_t S_t^2 \frac{\partial^2 C}{\partial S_t^2} - \frac{1}{2} \gamma^2 v_t \frac{\partial^2 C}{\partial v_t^2} - \rho \gamma v_t S_t \frac{\partial^2 C}{\partial S_t \partial v_t} + r B_t + \Delta^{(2)}(t) \left( \frac{\partial \tilde C}{\partial t} + \frac{1}{2} v_t S_t^2 \frac{\partial^2 \tilde C}{\partial S_t^2} + \frac{1}{2} \gamma^2 v_t \frac{\partial^2 \tilde C}{\partial v_t^2} + \rho \gamma v_t S_t \frac{\partial^2 \tilde C}{\partial S_t \partial v_t} \right) \right] \mathrm{d}t.
Note that the random components have disappeared from the portfolio dynamics. Thus, the portfolio should be insensitive to changes in the market, provided the market respects the assumptions of the Heston model.
Secondly, we assume a model which is similar to the classical Delta-Vega hedge, but with adjusted hedge ratios. We call this strategy the adjusted Delta-Vega hedge. Assuming a dynamic model (see Appendix B for more details), we can apply Itô’s lemma to obtain
\mathrm{d}C_t^{\mathrm{Dynamic}} = \frac{\partial C}{\partial t}\,\mathrm{d}t + \sum_{p_t} \frac{\partial C}{\partial p_t}\,\mathrm{d}p_t + \frac{1}{2} \sum_{p_t} \sum_{q_t} \frac{\partial^2 C}{\partial p_t \partial q_t}\,\langle \mathrm{d}p_t, \mathrm{d}q_t \rangle,
with p_t, q_t \in \{S_t, v_t, \bar v_t, \gamma_t\}. For notational purposes, we rewrite this expression as
\mathrm{d}C_t^{\mathrm{Dynamic}} = c_1\,\mathrm{d}t + c_2\,\mathrm{d}W_t^{S} + c_3\,\mathrm{d}W_t^{v} + c_4\,\mathrm{d}W_t^{\bar v} + c_5\,\mathrm{d}W_t^{\gamma},
where the coefficients are defined as
c_1 = \frac{\partial C}{\partial t} + r S_t \frac{\partial C}{\partial S_t} + \kappa(\bar v_t - v_t) \frac{\partial C}{\partial v_t} + \kappa_{\bar v}(\bar v_{\mathrm{Mean}} - \bar v_t) \frac{\partial C}{\partial \bar v_t} + \kappa_\gamma(\gamma_{\mathrm{Mean}} - \gamma_t) \frac{\partial C}{\partial \gamma_t} + \frac{1}{2} \sum_{p_t} \sum_{q_t} \frac{\partial^2 C}{\partial p_t \partial q_t}\,\langle \mathrm{d}p_t, \mathrm{d}q_t \rangle,
c_2 = \sqrt{v_t}\, S_t \frac{\partial C}{\partial S_t} + \rho \gamma_t \sqrt{v_t} \frac{\partial C}{\partial v_t} + \rho \rho_{\bar v} a_{\bar v} \sqrt{\bar v_t} \frac{\partial C}{\partial \bar v_t} + \rho \rho_\gamma a_\gamma \sqrt{\gamma_t} \frac{\partial C}{\partial \gamma_t},
c_3 = \gamma_t \sqrt{v_t (1 - \rho^2)} \frac{\partial C}{\partial v_t} + \rho_{\bar v} a_{\bar v} \sqrt{\bar v_t} \sqrt{1 - \rho^2} \frac{\partial C}{\partial \bar v_t} + \rho_\gamma a_\gamma \sqrt{\gamma_t} \sqrt{1 - \rho^2} \frac{\partial C}{\partial \gamma_t},
c_4 = a_{\bar v} \sqrt{\bar v_t} \sqrt{1 - \rho_{\bar v}^2} \frac{\partial C}{\partial \bar v_t},
c_5 = a_\gamma \sqrt{\gamma_t} \sqrt{1 - \rho_\gamma^2} \frac{\partial C}{\partial \gamma_t}.
The hedge ratios now take the correlated components of v̄ and γ into account, due to the minimized variance constraints of Equation (20). Using Equation (24), the hedge ratios can be derived, giving
\Delta^{(1)}_{\mathrm{Adjusted}}(t) = \frac{c_2}{\sqrt{v_t}\, S_t} - \frac{\tilde c_2\, c_3}{\sqrt{v_t}\, S_t\, \tilde c_3}, \qquad \Delta^{(2)}_{\mathrm{Adjusted}}(t) = \frac{c_3}{\tilde c_3},
where c̃_2 and c̃_3 are defined as in Equation (25) for option B. The additional stochastic variables v̄_t and γ_t follow mean-reverting processes (see Equation (A1) in Appendix B), such that the portfolio dynamics are found to be,
\mathrm{d}\Pi_t = \left( -c_1 + \frac{c_3}{\tilde c_3} \tilde c_1 + \frac{r\, c_2}{\sqrt{v_t}} - \frac{r\, \tilde c_2\, c_3}{\sqrt{v_t}\, \tilde c_3} + r B_t \right) \mathrm{d}t + \left( \frac{c_3}{\tilde c_3} \tilde c_4 - c_4 \right) \mathrm{d}W_t^{\bar v} + \left( \frac{c_3}{\tilde c_3} \tilde c_5 - c_5 \right) \mathrm{d}W_t^{\gamma}.
The portfolio still depends on the randomness associated with v̄ and γ. However, the randomness associated with S_t and v_t has disappeared, including the random components of v̄ and γ that are correlated to S_t and v_t.
We also consider a strategy that aims at completely hedging against any changes of v̄ and γ, by introducing two additional options,
\Pi_t = -C_t + \Delta^{(1)}_{\mathrm{Full}}(t)\, S_t + \Delta^{(2)}_{\mathrm{Full}}(t)\, \tilde C_t + \Delta^{(3)}_{\mathrm{Full}}(t)\, \bar C_t + \Delta^{(4)}_{\mathrm{Full}}(t)\, \hat C_t, \qquad \Pi_0 = 0.
Again, all options depend on the same underlying market factors, but they have different contract details. In this case, we require protection against any changes of S_t, v_t, v̄_t and γ_t; hence, we impose
\langle \mathrm{d}\Pi_t, \mathrm{d}S_t \rangle = 0, \quad \langle \mathrm{d}\Pi_t, \mathrm{d}W_t^{v} \rangle = 0, \quad \langle \mathrm{d}\Pi_t, \mathrm{d}W_t^{\bar v} \rangle = 0, \quad \langle \mathrm{d}\Pi_t, \mathrm{d}W_t^{\gamma} \rangle = 0.
Substituting these constraints leads to a system of equations, which is solved by
\begin{pmatrix} \Delta^{(1)}_{\mathrm{Full}}(t) \\ \Delta^{(2)}_{\mathrm{Full}}(t) \\ \Delta^{(3)}_{\mathrm{Full}}(t) \\ \Delta^{(4)}_{\mathrm{Full}}(t) \end{pmatrix} = \begin{pmatrix} \sqrt{v_t}\, S_t & \tilde c_2 & \bar c_2 & \hat c_2 \\ 0 & \tilde c_3 & \bar c_3 & \hat c_3 \\ 0 & \tilde c_4 & \bar c_4 & \hat c_4 \\ 0 & \tilde c_5 & \bar c_5 & \hat c_5 \end{pmatrix}^{-1} \begin{pmatrix} c_2 \\ c_3 \\ c_4 \\ c_5 \end{pmatrix}.
By imposing Equation (29), one removes all randomness associated with S t , v t , v ¯ t and γ t . Hence, the portfolio dynamics only depend on deterministic changes under the assumed market dynamics. From this point onwards, we refer to this strategy as the full hedge strategy.
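Numerically, the full-hedge ratios follow from a 4×4 linear solve rather than an explicit matrix inverse. A minimal sketch; the coefficient vectors are supplied by the user and the helper name is illustrative:

```python
import numpy as np

def full_hedge_ratios(c, c_b, c_c, c_d, sqrt_v_S):
    """Solve the 4x4 system above for the full-hedge ratios.
    c holds (c2, c3, c4, c5) of option A; c_b, c_c, c_d hold the
    analogous coefficients of the three hedge options B, C-bar, C-hat."""
    A = np.array([
        [sqrt_v_S, c_b[0], c_c[0], c_d[0]],
        [0.0,      c_b[1], c_c[1], c_d[1]],
        [0.0,      c_b[2], c_c[2], c_d[2]],
        [0.0,      c_b[3], c_c[3], c_d[3]],
    ])
    # np.linalg.solve is preferred over forming the inverse explicitly.
    return np.linalg.solve(A, np.asarray(c))
```

The solve fails (raises `LinAlgError`) when the three hedge options do not span the v, v̄ and γ sensitivities, which mirrors the requirement that their contract details differ.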

4.1. Hedge Test Experiments

In this section, we discuss the results of the hedge test explained above. This experiment indicates which approach is more accurate when evaluating future option prices: a measure which assumes v̄ and γ to be constant, or a measure which assumes v̄ and γ to change over time. We thus consider the classical approach, with constant v̄ and γ, and two approaches that assume v̄ and γ to change over time. The different hedging strategies are tested in a simulated market as well as in an empirical market.
The empirical hedge test is based on daily implied volatility surfaces of the S&P-500 European put and call options from January 2006 to February 2014.

4.1.1. Simulated Market

First, we evaluate the hedging strategies in a controlled environment. In this test, we assume S t and v t to follow the Heston model, with the key difference that v ¯ and γ are time-dependent mean reverting processes (as in Equation (A1) in Appendix B). This set-up will be informative, as it is in line with historical observations. The processes of S t and v t are discretized by the Quadratic Exponential (QE) scheme, as proposed in Andersen (2008). Furthermore, the processes of v ¯ and γ are discretized by the Milstein scheme, while also taking the correlations with v t into account. The details of this simulation scheme can be found in Appendix B.
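For the v̄ and γ processes, a single Milstein step can be sketched as below, assuming CIR-type dynamics dx = κ(θ − x)dt + a√x dW; the precise form of Equation (A1) is given in Appendix B, and the truncation at zero is a common practical safeguard rather than necessarily the paper's choice.

```python
import numpy as np

def milstein_cir_step(x, kappa, theta, a, dt, dw):
    """One Milstein step for dx = kappa*(theta - x) dt + a*sqrt(x) dW.
    The Milstein correction for b(x) = a*sqrt(x) is (a^2/4)(dW^2 - dt)."""
    x_pos = np.maximum(x, 0.0)
    x_new = (x + kappa * (theta - x) * dt + a * np.sqrt(x_pos) * dw
             + 0.25 * a * a * (dw * dw - dt))
    return np.maximum(x_new, 0.0)   # truncate to keep the path nonnegative

# Illustrative simulation of vbar over one year, using the Section 4 values
# kappa_vbar = 1.4, vbar_Mean = 0.1, a_vbar = 0.8.
rng = np.random.default_rng(1)
dt, n_steps = 1.0 / 252, 252
vbar = np.full(10_000, 0.1)
for _ in range(n_steps):
    dw = rng.standard_normal(vbar.shape) * np.sqrt(dt)
    vbar = milstein_cir_step(vbar, kappa=1.4, theta=0.1, a=0.8, dt=dt, dw=dw)
```

The correlation of these increments with the v_t innovations, as used in the text, would be imposed by correlating `dw` with the variance shocks of the QE scheme.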
The performance of the classical Heston Delta-Vega, the adjusted Heston Delta-Vega and the full hedge is compared under the dynamics of this market. The strategies all aim at hedging a short position in a European call option with maturity T = 1.0 years and strike K = 50. Moreover, at the start date of the option, t = 0, we assume
S_0 = 49, \quad v_0 = 0.05, \quad \bar v_0 = 0.1, \quad \gamma_0 = 0.7, \quad \kappa = 1.0, \quad \rho = -0.75, \quad r = 0.01.
The adjusted Delta-Vega and full hedge involve additional options. These options depend on the same stock as option A, but the contract details are different, i.e.,
\tilde K = 50.0, \quad \tilde T = 2.0, \qquad \bar K = 50.0, \quad \bar T = 3.0, \qquad \hat K = 50.0, \quad \hat T = 4.0.
Moreover, we assume the following parameters in the dynamics of the v̄ and γ processes,
\kappa_{\bar v} = 1.4, \quad \bar v_{\mathrm{Mean}} = 0.1, \quad a_{\bar v} = 0.8, \quad \rho_{\bar v} = 0.9, \qquad \kappa_\gamma = 2.1, \quad \gamma_{\mathrm{Mean}} = 0.7, \quad a_\gamma = 1.0, \quad \rho_\gamma = 0.9.
In reality, these parameters cannot be freely chosen, as they are implied by the market. By analysing the historical behaviour of v̄_t and γ_t, one is able to estimate these SDE parameters. In this case, however, we assume these parameters to be known and use them when determining the hedge strategy.
Ideally, the portfolio value should be equal to zero at each point in time, because the initial portfolio value is equal to zero. Every deviation from zero is regarded as a hedge error. Over the entire life of the option, we desire the mean and standard deviation of this error to be as close to zero as possible. To this end, we introduce the following error measures for simulation j,
E^{(j)}_{\mathrm{Mean}} = \frac{1}{N+1} \sum_{i=0}^{N} \Pi^{(j)}_{\frac{i}{N} T}, \qquad E^{(j)}_{\mathrm{Std}} = \sqrt{ \frac{1}{N} \sum_{i=0}^{N} \left( \Pi^{(j)}_{\frac{i}{N} T} - E^{(j)}_{\mathrm{Mean}} \right)^2 }.
The overall hedging performance over the M simulations can be judged by
\bar E_{\mathrm{Mean}} = \frac{1}{M} \sum_{j=1}^{M} E^{(j)}_{\mathrm{Mean}}, \qquad \bar E_{\mathrm{Std}} = \frac{1}{M} \sum_{j=1}^{M} E^{(j)}_{\mathrm{Std}}.
In the current set-up, these error measures are random variables, as they are determined by means of Monte Carlo simulations. To this end, we analyse the stability of the error measures across the simulated trajectories by the standard error,
$$\text{SE}(E) = \sqrt{ \frac{1}{M-1} \sum_{j=1}^{M} \left( \bar E - E^{(j)} \right)^2 } \Big/ \sqrt{M},$$
with error measure E, its mean Ē and the per-simulation values E^(j).
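As a concrete sketch, the error measures above can be computed from a matrix of simulated hedge-portfolio values; function names and the array layout are our own choices, not the authors' code.

```python
import numpy as np

def hedge_error_stats(portfolio_paths):
    """Per-simulation hedge-error statistics.

    portfolio_paths: array of shape (M, N+1) holding the hedge-portfolio
    value Pi at the N+1 rebalancing dates of each of M simulations.
    Since the initial portfolio value is zero, any deviation from zero
    is a hedge error.
    """
    e_mean = portfolio_paths.mean(axis=1)           # E_Mean^(j): average of N+1 values
    e_std = portfolio_paths.std(axis=1, ddof=1)     # E_Std^(j): 1/N normalisation
    return e_mean, e_std

def aggregate(errors):
    """Average an error measure over the M simulations and report its
    Monte Carlo standard error SE(E) = sample std / sqrt(M)."""
    errors = np.asarray(errors, dtype=float)
    return errors.mean(), errors.std(ddof=1) / np.sqrt(errors.size)
```

Feeding in the M = 200 simulated portfolio paths then reproduces the layout of Table 1, one (mean, standard error) pair per strategy and rebalancing frequency.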
The performance of the strategies under these assumptions based on M = 200 simulations is given in Table 1.
These results show that, in this experiment, it is beneficial to take parameter correlations into account when hedging, in terms of both the mean error and the standard deviation. While still not perfect, the dynamic Heston Delta-Vega hedge is better able to remain risk-neutral on average and it deviates less from this average. The full hedge performs even better with a mean approximately equal to zero and a standard deviation equal to or lower than any of the previous strategies, despite the dynamic behaviour of the market.
The purpose of these hedging strategies is to replicate the value of an option. In the classical Delta-Vega hedge, only S_t and v_t are allowed to change, so, by respecting the assumptions of the static Heston model, we are not fully able to replicate the future option values. On the other hand, when assuming the adjusted Heston model, v̄_t and γ_t are allowed to change as well. Hedging strategies that consider this dynamic behaviour produce more accurate future option price estimates in the current set-up. This indicates the importance of taking dynamic parameters into account when determining future option prices, even when the assumptions of the underlying model are violated. However, note that the comparison is not completely fair, since we specifically assume a Heston market with time-dependent v̄ and γ. It can therefore be expected that a strategy taking these assumptions into account outperforms one that does not. Therefore, to better assess the true performance of these strategies, we also perform an empirical test.

4.1.2. Empirical Market

When hedging in practice, the underlying assumptions are not always respected and the true parameters are, of course, unknown. To quantify the effect of these difficulties, we test the hedging strategies on historical data in this section. All hedging ratios depend on the dynamic risk-neutral parameters; hence, v_t, v̄_t, γ_t and ρ_t vary over time, according to the changes in the historically observed implied volatility surfaces. Moreover, the adjusted Heston Delta-Vega hedge depends on the correlation and volatility of v̄ and γ, which we define as follows,
$$\rho_{\bar v} = \rho_{\gamma} = 0.95, \qquad a_{\bar v} = \sqrt{ \frac{1}{N} \sum_{i=-N}^{-1} \frac{ \left( \log \frac{\bar v_{t_{i+1}}}{\bar v_{t_i}} \right)^2 }{\Delta t} }, \qquad a_{\gamma} = \sqrt{ \frac{1}{N} \sum_{i=-N}^{-1} \frac{ \left( \log \frac{\gamma_{t_{i+1}}}{\gamma_{t_i}} \right)^2 }{\Delta t} }.$$
It can be quite challenging to determine the correlation between the Brownian motions driving the parameters; hence, we assume it to be equal for each time interval. Moreover, the volatility estimator only depends on past observations: the sum indices run from −N to −1 with t_{−N} = −1 year, where t_0 = 0 indicates the starting date of the option.
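This realized-volatility estimator is a one-liner in code; the function name and the array convention (one past year of observations, oldest first) are our own assumptions.

```python
import numpy as np

def sde_volatility(past_values, dt):
    """Estimate the SDE volatility a of a positive process (vbar_t or
    gamma_t) from its past observations, via the realised variance of
    log-increments: a^2 = (1/N) * sum_i log(x_{t_{i+1}}/x_{t_i})^2 / dt."""
    x = np.asarray(past_values, dtype=float)
    log_increments = np.diff(np.log(x))   # N increments from N+1 observations
    return np.sqrt(np.mean(log_increments ** 2) / dt)
```

For a process that grows by a constant log-increment the estimator returns exactly that increment scaled by the square root of the time step.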
In this test, we hedge an at-the-money European call option with one year maturity on the S&P-500 index. All hedging strategies are subjected to daily rebalances that are based on the parameters as seen on that date. The transaction costs are excluded from this test, as we are interested in the performance of the hedging strategies with respect to changes in v ¯ and γ . Including transaction costs would increase the costs of the full dynamic Heston hedging strategy, as it involves more financial assets. This would bias the results and it is therefore best to exclude the transaction costs from the present test.
The test is repeated on a monthly basis from July 2006 to February 2013 and the performance is assessed by the mean error and mean squared error during the life of the option,
$$E_{\text{Mean}}^{(j)} = \frac{1}{N+1} \sum_{i=0}^{N} \Pi_{\frac{i}{N}T}^{(j)}, \qquad E_{\text{MSE}}^{(j)} = \frac{1}{N+1} \sum_{i=0}^{N} \left( \Pi_{\frac{i}{N}T}^{(j)} \right)^2.$$
The time intervals of the hedging portfolios overlap in this set-up, since the test is repeated on a monthly basis and the option maturity is one year. However, all strategies depend on different initial conditions and therefore perform differently, despite the overlapping time-intervals. The results of this test are graphically presented in Figure 2.
The hedging performances are similar to the simulation results:
  • The classical Delta-Vega hedge does not take changes of the parameters into account and appears to be the most unstable method. This strategy has the most frequent and largest error “peaks” and is therefore the least reliable.
  • The adjusted Delta-Vega hedge is still not perfect, but appears to be more stable than the classical Delta-Vega hedge: the error “peaks” occur less frequently and are less pronounced. The error could be reduced further by optimizing the correlation and volatility of v̄_t and γ_t, but such an optimization can only be performed in hindsight, which is not the objective of this test.
  • The full hedge is the most stable out of the three strategies. It does not have any error “peaks” and outperforms the other two strategies in most cases. Moreover, this strategy does not depend on additional parameters which may introduce an error if chosen poorly, such as in the dynamic Heston Delta-Vega hedge.
We can conclude that respecting the assumptions of the underlying model (in this case, the Heston model) does not necessarily lead to more accurate future option prices. By taking changes of the v ¯ and γ parameters into account, we are able to replicate option values more accurately both in a controlled (simulation) and uncontrolled (empirical) environment.

5. VIX Heston Model Results

5.1. Data and Calibration

The dataset contains monthly implied volatility surfaces of the S&P-500 European put and call options from January 2006 to February 2017. Each implied volatility surface contains five different strike levels (80%, 90%, 100%, 110% and 120% of S 0 ) and maturities (0.25, 0.5, 1, 1.5 and 2 years). We split the dataset into a training set and a test set. The training set consists of implied volatility surfaces from January 2006 to February 2014 and is only used to identify the optimal regression components. The test set contains monthly implied volatility surfaces from March 2014 to February 2017 and is used to assess the accuracy of the VIX Heston model.
Furthermore, to assess the robustness of the VIX Heston model, we apply it to monthly implied volatility surfaces of the FTSE-100 (United Kingdom) and STOXX-50 (Europe) as well. The training set includes data from October 2010 to June 2015 and the test set contains data from July 2015 to February 2017.
To assess the accuracy of the VIX Heston model, one must first calibrate the model according to Equation (17). Using the US training set described above, we obtain
$$\Omega_t^{\text{Heston}}(X) = \left\{ \kappa_t = 1.0, \quad v_{0,t} = 0.0140 + 0.0090 \cdot \text{VIX}_t^2, \quad \bar v_t = 0.0957 + 0.0087 \cdot \left( \text{VIX}_t^{\text{filter}} \right)^2, \quad \gamma_t = 9.6479 \cdot 10^{-5} + 0.0270 \cdot \text{VIX}_t, \quad \rho_t = -0.7294 \right\}.$$
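In code, this calibrated map from VIX levels to Heston parameters reads as follows. The coefficients are those reported above; the function name, the unit convention for the VIX inputs, and writing ρ with the negative sign typical of equity-index calibrations are assumptions of this sketch.

```python
def heston_params_from_vix(vix, vix_filtered):
    """Predicted risk-neutral Heston parameter set Omega_t for the US
    data set, given the current and filtered VIX levels (sketch)."""
    return {
        "kappa": 1.0,                                  # fixed in the calibration
        "v0":    0.0140 + 0.0090 * vix ** 2,
        "vbar":  0.0957 + 0.0087 * vix_filtered ** 2,
        "gamma": 9.6479e-5 + 0.0270 * vix,
        "rho":   -0.7294,                              # assumed negative sign
    }
```

Plugging a simulated VIX path through this map yields a full predicted implied volatility surface per real-world scenario.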
The calibrated parameters of the UK and Europe datasets can be found in Appendix C. The accuracy is assessed by comparing the predicted to the observed implied volatility surfaces of the test set, according to the following error measures,
$$\begin{aligned} \text{SSE} &= \sum_{t,K,T} \left( \sigma^{\text{Market}}(t,K,T) - \sigma^{\text{Heston}}(t, \Omega_t^{\text{Heston}}, K, T) \right)^2, \\ \text{MAE} &= \frac{1}{N_\sigma} \sum_{t,K,T} \left| \sigma^{\text{Market}}(t,K,T) - \sigma^{\text{Heston}}(t, \Omega_t^{\text{Heston}}, K, T) \right|, \\ R^2 &= 1 - \frac{\text{SSE}}{\sum_{t,K,T} \left( \sigma^{\text{Market}}(t,K,T) - \bar\sigma^{\text{Market}} \right)^2}, \\ R^2_{\text{Min}} &= \min_t \left\{ R_t^2 : t \in [t_{\min}, t_{\max}] \right\}, \end{aligned}$$
where Ω t Heston is defined as the predicted parameter set, N σ as the total number of observed implied volatilities and σ ¯ Market as the average of all observed implied volatilities. The corresponding results are displayed in Table 2. The predicted paths for the Heston parameters of the US dataset can be found in Figure 3. The predicted paths of the Heston parameters of the UK and Europe dataset are graphically presented in Appendix C.
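The first three error measures can be evaluated with a few lines of code; the flattened array layout over the (t, K, T) grid is an assumption of this sketch, and R²_Min would simply apply the same R² formula per date t and take the minimum.

```python
import numpy as np

def surface_errors(sigma_market, sigma_model):
    """SSE, MAE and R^2 between observed and predicted implied
    volatilities over the whole (t, K, T) grid, flattened to 1-D."""
    market = np.asarray(sigma_market, dtype=float).ravel()
    model = np.asarray(sigma_model, dtype=float).ravel()
    resid = market - model
    sse = np.sum(resid ** 2)
    mae = np.mean(np.abs(resid))          # N_sigma = market.size
    r2 = 1.0 - sse / np.sum((market - market.mean()) ** 2)
    return sse, mae, r2
```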
The regression model loses some accuracy compared to the unrestricted model. On average, there is an error of 0.003 between the implied volatility and the unrestricted Heston model in the US dataset, which corresponds to a relative error of approximately 2.2%. The VIX Heston model has an average absolute error of 0.012, which corresponds to a 7.7% error. Thus, by implementing the regression models, we introduce an additional error of 5.5%, on average. The accuracy of the VIX Heston model is higher in the UK and Europe datasets, where the accuracy loss is only 2.9% and 1.2%, respectively.
Finally, we discuss the predictions of the parameter γ. In the US dataset, the prediction of γ is relatively inaccurate (see Figure 3), as the out-of-sample correlation to the VIX index is much lower than the in-sample correlation (0.86 in-sample versus 0.32 out-of-sample). In the UK and Europe datasets, this phenomenon does not seem to be present and consequently the predictions of γ are much more accurate (see Figure A1 and Figure A2 in Appendix C). This also explains why the implied volatility surface predictions in the UK and Europe datasets are more accurate than the US predictions, as can be seen in Table 2. Thus, there appears to be another, yet unknown, factor driving γ in the US dataset, which is absent in the UK and Europe datasets. Analysing the cause of this phenomenon might be a topic for future study. However, in this research, we assume that the VIX Heston model is sufficiently accurate to describe the dynamic behaviour of the Heston parameters, as γ only has a minor effect on the implied volatility surface.

5.2. SCR Impact Study

In this test, we assess the impact of assuming time-dependent risk-neutral parameters, according to three scenarios. Each scenario corresponds to a different initial market and consequently the initial expected liabilities, L_0, will differ. We assume a no-arbitrage fee, i.e., a fee for which L_0 is equal to zero. The general contract details of the variable annuity can be found in Table 3, and the initial values of the different scenarios, together with the fair premiums, are presented in Table 4.
We assume the fund value to follow a GBM with initial value F 0 = 1000 , so the fund value follows Equation (7). The VIX index is modelled simultaneously, following a mean-reverting path
$$d\,\text{vix}_t = \kappa_{\text{vix}} \left( \text{vix}_{\text{Mean}} - \text{vix}_t \right) dt + \gamma_{\text{vix}}\, \text{vix}_t^{\lambda_{\text{vix}}} \left( \rho_{\text{vix}}\, dW_t^S + \sqrt{1 - \rho_{\text{vix}}^2}\, dW_t^{\text{vix}} \right), \qquad \text{VIX}_t = 100 \cdot \text{vix}_t.$$
The process parameters are estimated with the generalized Method of Moments (see Hansen (1982)),
$$\sigma = 0.21, \quad \mu = 0.05, \quad \kappa_{\text{vix}} = 4.964, \quad \text{vix}_{\text{Mean}} = 0.207, \quad \gamma_{\text{vix}} = 1.859, \quad \lambda_{\text{vix}} = 1.271.$$
Moreover, we set ρ_vix = −0.75, which is in line with observations in the risk-neutral market. The SDEs are discretized by the Milstein scheme.
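A compact sketch of that joint Milstein discretisation follows. All names are ours; the CEV-type exponent λ_vix enters the diffusion g(x) = γ_vix · x^λ_vix, whose Milstein correction is ½·g·g′·(ΔW² − Δt), and we take ρ_vix negative, the usual sign of the equity–VIX correlation.

```python
import numpy as np

def simulate_fund_and_vix(f0, vix0, mu, sigma, kappa, vix_mean, gamma_vix,
                          lam, rho, T, n_steps, n_paths, seed=0):
    """Joint Milstein simulation of the GBM fund value F_t and the
    mean-reverting vix_t process, correlated through rho (a sketch)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    f = np.full(n_paths, float(f0))
    vix = np.full(n_paths, float(vix0))
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rng.standard_normal(n_paths)
        dw_s = np.sqrt(dt) * z1
        dw_v = np.sqrt(dt) * (rho * z1 + np.sqrt(1.0 - rho ** 2) * z2)
        # GBM Milstein step: correction 0.5 * sigma^2 * F * (dW^2 - dt)
        f = f * (1.0 + mu * dt + sigma * dw_s + 0.5 * sigma ** 2 * (dw_s ** 2 - dt))
        # CEV-type diffusion g(x) = gamma_vix * x^lam and its derivative
        g = gamma_vix * vix ** lam
        g_prime = gamma_vix * lam * vix ** (lam - 1.0)
        vix = (vix + kappa * (vix_mean - vix) * dt + g * dw_v
               + 0.5 * g * g_prime * (dw_v ** 2 - dt))
        vix = np.maximum(vix, 1e-8)   # keep the process positive
    return f, 100.0 * vix             # VIX index quoted as 100 * vix
```

With the estimated parameters above, each simulated VIX path can then be mapped to a full risk-neutral parameter set per scenario.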

5.2.1. Guaranteed Minimum Accumulation Benefit

We have simulated 100,000 real-world trajectories and evaluated the loss function as defined in Equation (4) under the two risk-neutral measures (with either constant or time-dependent parameters). This way, we are able to construct and compare the probability density functions under these measures. This process is repeated for the three different scenarios that are presented in Table 4. In Figure 4, we have graphically represented the impact on the probability density function of the loss distribution for the different scenarios. Moreover, the Solvency Capital Requirements associated with these distributions can be found in Table 5.
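Under the standard Solvency II reading of the SCR as the 99.5% quantile (value-at-risk) of the one-year loss distribution, the requirement can be read off directly from the simulated losses; the function name is ours and this is a sketch, not the authors' implementation.

```python
import numpy as np

def scr_from_losses(losses, level=0.995):
    """Solvency Capital Requirement as the 99.5% quantile of the
    simulated one-year loss distribution L = V_1 - V_0."""
    return float(np.quantile(np.asarray(losses, dtype=float), level))
```

Applied to the losses simulated under each risk-neutral measure and scenario, this yields entries like those reported in Table 5.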
The impact on the probability density functions and the Solvency Capital Requirements is substantial and we wish to highlight a few noteworthy features.
The loss distribution under the original risk-neutral measure appears to be centred around 0, independent of the initial conditions. The one-year loss is defined as the difference between the policy values at times t = 0 and t = 1. On average, the policy value will not change significantly if the risk-neutral parameters stay the same; therefore, the loss distribution must be centred around 0, as long as the initial risk-neutral parameters do not change. The loss distribution under the time-dependent risk-neutral measure, on the other hand, depends heavily on the risk-neutral parameters at t = 0. Consider Scenario 2, for example. Initially, the volatility, v̄ and γ are relatively low, resulting in low initial expected liabilities. However, according to the mean-reverting VIX index, these parameters are more likely to increase over time and, with them, the expected liabilities. Consequently, the one-year losses are much higher compared to those under the original risk-neutral measure, which still assumes the relatively low initial parameters at t = 1. This effect is clearly visible in Figure 4, where the loss distribution is shifted to the right. Conversely, the one-year losses under the time-dependent risk-neutral measure in Scenario 3 are much lower, as the expected liabilities are more likely to decrease. In conclusion, when the initial volatility is low (high), we can expect a higher (lower) SCR under the time-dependent risk-neutral measure. Around the 2008 credit crisis, this resulted in a higher SCR before the crisis and a lower SCR during the crisis.
Besides the shifted mean, the loss distribution under the time-dependent risk-neutral measure also tends to have heavier tails, which is especially visible in Scenario 1. This is caused by the fact that v ¯ and γ depend on the state of the market, which results in more extreme losses (or gains). If, for example, the market crashes, v ¯ and γ are likely to increase. This will generate even higher expected liabilities, resulting in even higher losses. However, if the market flourishes, v ¯ and γ tend to be much lower, leading to lower expected liabilities and lower losses (or higher gains). This feature is present in all scenarios of Figure 4, but is best visible in Scenario 1, where the probability of an extreme loss as well as the probability of an extreme gain is higher under the time-dependent risk-neutral measure. Consequently, the SCRs under the two risk-neutral measures are not necessarily equal, not even when the initial conditions are equal to the average market conditions (such as in Scenario 1).
To give a broad overview of the impact, we determine the SCR of a variable annuity with the GMAB rider for multiple points in time. The contract details presented in Table 3 remain unchanged, but the initial parameters depend on historical data. For computational purposes, the parameter α is assumed to be constant and equal to 0.01. In this test, we compare four different risk-neutral measures:
  • A time-dependent risk-neutral measure where all parameters depend on the simulated state of the market.
  • A risk-neutral measure where F 1 and v 1 depend on the simulated market and the risk-neutral parameters are equal to the parameters as observed on t = 0 . This measure is equivalent to the original risk-neutral measure that we have previously defined.
  • A risk-neutral measure where F 1 and v 1 depend on the simulated market and the risk-neutral parameters are equal to the parameters as observed on t = 1 . We refer to this measure as the future risk-neutral measure.
  • A risk-neutral measure where F 1 and v 1 depend on the simulated market and the risk-neutral parameters are equal to the realized regression model predictions at t = 1 of Figure 3. This measure is different from the time-dependent risk-neutral measure, as it depends on the realized state VIX, instead of the simulated VIX. We refer to this measure as the future VIX risk-neutral measure.
Thus far, we have applied the first two risk-neutral measures in our analysis. The latter two measures can only be applied to historical data (otherwise, the observed parameters at t = 1 are undefined) and are merely added for explanatory purposes. Ideally, the SCRs under the future and the future VIX risk-neutral measures are equal. The difference between these measures is caused by prediction errors of the regression model. Hence, the difference between the SCRs under these measures is an indication of the accuracy of the regression models.
In Figure 5, the results under the different risk-neutral measures are displayed. The difference between the original and time-dependent risk-neutral measure is also summarized in Table 6.

5.2.2. Discussion on Impact

The impact on the SCR is significant, with a maximum absolute difference of over 50%. Moreover, the difference appears to be structural over time, with a mean absolute difference of almost 30%. The results in Figure 5 also contain some stylized facts that we have already seen in Figure 4 and we discuss them by distinguishing different time periods:
  • 2005–2007: The volatility in these years was relatively low and this translates to a somewhat higher SCR under the original measure and an even higher SCR under the time-dependent measure, which is similar to Scenario 2. The SCR under the future measure is rising, due to higher expected liabilities at t = 1 , indicating that more volatile times are coming.
  • Early 2008: The market has not crashed yet, but volatility is starting to increase, resulting in a smaller difference between the time-dependent and original measures, which is comparable to Scenario 1. The future measure, however, takes the fact that the market will crash into account. Hence, the SCR is the highest under the future measure.
  • Late 2008–2012: During these years, several spikes occurred in the implied volatility surface, which increase the initial liabilities; the expected losses therefore decrease, analogously to Scenario 3. This results in lower SCRs during this period. Moreover, we see that the SCRs under the future risk-neutral measure are lowest during these highly volatile periods, since this measure depends on the realized market at t = 1, which has returned to its less volatile state. Consequently, the expected liabilities at t = 1 and the SCR are the lowest under the future risk-neutral measure.
  • 2013–2017: This period is comparable to 2005–2007, apart from the fact that the volatility is approximately constant throughout these years. This translates into an almost equal SCR prediction under the original and future risk-neutral measures.
The difference between the future and future VIX measures is small. This means that the expected liabilities at t = 1 are almost equal under both measures, indicating the accuracy of the regression model, at least for the realized states of the market. Under the assumption that the simulated markets behave similarly to historical observations, this means that the expected liabilities at t = 1 under the time-dependent measure will be in line with the simulated states of the market.
Finally, we also mention that we performed a very similar study for the GMWB product. Essentially, the same results were obtained in that case; however, the impact was somewhat less severe.

6. Conclusions

The research in this paper was motivated by the open question of how to value future guarantees that are issued by insurance companies. The future value of these guarantees is essential for regulatory and Asset Liability Management purposes. The complexity of the valuation is found in the fact that, first, these guarantees involve optionalities and thus need to be valued using the risk-neutral measure; and, second, whereas this measure is well-defined at t = 0 , the future risk-neutral measure, at future time t = 1 , is debatable.
For a large part, the liabilities evolve according to real-world models and, therefore, the future values of these guarantees need to be computed conditionally on the real-world scenarios. In this paper, we demonstrate the benefits of option valuation under a new, so-called P Q measure in Asset Liability Management. This is done by modelling the Heston model parameters, which form the parameterization of the implied volatility surface, conditional on the real-world scenarios.
Basically, we advocate the use of dynamic risk-neutral parameters in cases in which we need to evaluate asset prices under the P measure before an option value is required at a future time point. It means that the development of the real-world asset paths in the future is taken into account in the option valuation.
A hedge test was implemented for an academic test case, in which the dynamic strategy outperformed the strategy with static parameters. Importantly, the results from this test case were confirmed by a hedge test based on 12 years of empirical, historical data. Conclusions were already drawn after each of the structured test experiments was presented.
The results obtained for the Solvency Capital Requirement of the variable annuities exhibited differences of up to 50% compared to the conventional risk-neutral pricing of these annuities. Next to that, we saw that the SCR was significantly less pro-cyclical under the new approach, which is a highly desirable feature.

Author Contributions

M.T.P.D., Data Curation; C.S.L.G., Formal Analysis; C.W.O., Methodology.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank Pieter Kloek for helpful discussions on option valuation in the context of the SCR.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Least-Squares Monte Carlo Method

In this section, we briefly describe the numerical techniques employed, which are based on the well-known least-squares Monte Carlo method.
The Least-Squares Monte Carlo method was first proposed by Longstaff and Schwartz (2001) for the valuation of American options. However, Bauer et al. (2010) were the first to implement it in an SCR context, to the best of our knowledge. The main purpose of the Least-Squares Monte Carlo algorithm is to reduce the number of inner simulations, possibly even to one path. In the first phase, a regression function is constructed using these inner estimates. The accuracy of the inner estimates is drastically reduced by reducing the number of inner simulations, but, by combining the results of all outer simulations, the inner errors cancel out. In the second phase of the algorithm, this regression function is used to evaluate the conditional expectation at t = 1 , without the need for inner simulations. For more details regarding the Least-Squares Monte Carlo algorithm, we refer the reader to Bauer et al. (2010).
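A minimal sketch of the two phases is given below; the polynomial basis, the function names and the one-dimensional state are our own simplifications, not the set-up of Bauer et al. (2010).

```python
import numpy as np

def lsmc_estimator(outer_states, inner_estimates, degree=3):
    """Least-Squares Monte Carlo in two phases.

    Phase 1: regress the noisy one-inner-path estimates of the t = 1
    policy value on the outer state at t = 1 (polynomial basis here).
    Phase 2: the fitted function replaces the inner simulations when the
    conditional expectation is needed in any outer scenario."""
    coeffs = np.polyfit(outer_states, inner_estimates, degree)   # phase 1
    return lambda state: np.polyval(coeffs, state)               # phase 2
```

With noisy inner estimates, the per-scenario regression errors largely cancel across the outer simulations, which is exactly the mechanism described above.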

Appendix B. Dynamic Heston Model

Based on the methodologies of Alexander et al. (2009), we assume v ¯ and γ in Equation (13) to be stochastic in the hedge test, i.e.,
$$\begin{aligned} d\bar v_t &= \kappa_{\bar v} \left( \bar v_{\text{Mean}} - \bar v_t \right) dt + a_{\bar v}\, \bar v_t \left( \rho_{\bar v}\, dW_t^2 + \sqrt{1 - \rho_{\bar v}^2}\, dW_t^{\bar v} \right), \\ d\gamma_t &= \kappa_{\gamma} \left( \gamma_{\text{Mean}} - \gamma_t \right) dt + a_{\gamma}\, \gamma_t \left( \rho_{\gamma}\, dW_t^2 + \sqrt{1 - \rho_{\gamma}^2}\, dW_t^{\gamma} \right), \end{aligned}$$
with speed of mean reversion parameters κ_v̄ and κ_γ, long-run averages v̄_Mean and γ_Mean, volatilities a_v̄ and a_γ and correlations ρ_v̄ and ρ_γ. Moreover, W_t^v̄ and W_t^γ are independent Brownian motions. Note that values of ρ_v̄ and ρ_γ close to 1 indicate a high correlation with the volatility process, which is to be expected based on historical data. Under these assumptions, the option price is driven by the changes in v̄ and γ as well, giving
$$C_t^{\text{Dynamic}} \equiv C\left( t, S_t, v_t, r, \bar v_t, \gamma_t, \kappa, \rho, K, T \right).$$
Next, we give details about the discretization of the dynamic Heston model. We can rewrite the dynamic Heston model with time-dependent v ¯ and γ as follows
$$\begin{aligned} dX_t &= \left( r - \tfrac12 v_t \right) dt + \sqrt{v_t} \left( \rho\, dW_t^v + \sqrt{1 - \rho^2}\, dW_t^S \right), \\ dv_t &= \kappa \left( \bar v_t - v_t \right) dt + \gamma_t \sqrt{v_t}\, dW_t^v, \\ d\bar v_t &= \kappa_{\bar v} \left( \bar v_{\text{Mean}} - \bar v_t \right) dt + a_{\bar v}\, \bar v_t \left( \rho_{\bar v}\, dW_t^v + \sqrt{1 - \rho_{\bar v}^2}\, dW_t^{\bar v} \right), \\ d\gamma_t &= \kappa_{\gamma} \left( \gamma_{\text{Mean}} - \gamma_t \right) dt + a_{\gamma}\, \gamma_t \left( \rho_{\gamma}\, dW_t^v + \sqrt{1 - \rho_{\gamma}^2}\, dW_t^{\gamma} \right), \end{aligned}$$
with X_t = log(S_t). We can simulate v_t and S_t with the Quadratic Exponential scheme proposed in Andersen (2008), with the minor difference that v̄ and γ differ in each time-step. The next step is to simulate v̄_t and γ_t such that they are correlated with v_t. First, we discretize the processes,
$$\begin{aligned} v_{t+\Delta t} &\approx v_t + \kappa \left( \bar v_t - \frac{v_t + v_{t+\Delta t}}{2} \right) \Delta t + \gamma_t \sqrt{ \frac{v_t + v_{t+\Delta t}}{2} } \sqrt{\Delta t}\, Z^v, \\ \bar v_{t+\Delta t} &\approx \bar v_t + \kappa_{\bar v} \left( \bar v_{\text{Mean}} - \bar v_t \right) \Delta t + a_{\bar v}\, \rho_{\bar v}\, \bar v_t \sqrt{\Delta t}\, Z^v + a_{\bar v}\, \bar v_t \sqrt{\Delta t} \sqrt{1 - \rho_{\bar v}^2}\, Z^{\bar v} + \tfrac12 a_{\bar v}^2\, \bar v_t \left( (Z^{\bar v})^2 - 1 \right) \Delta t, \\ \gamma_{t+\Delta t} &\approx \gamma_t + \kappa_{\gamma} \left( \gamma_{\text{Mean}} - \gamma_t \right) \Delta t + a_{\gamma}\, \rho_{\gamma}\, \gamma_t \sqrt{\Delta t}\, Z^v + a_{\gamma}\, \gamma_t \sqrt{\Delta t} \sqrt{1 - \rho_{\gamma}^2}\, Z^{\gamma} + \tfrac12 a_{\gamma}^2\, \gamma_t \left( (Z^{\gamma})^2 - 1 \right) \Delta t, \end{aligned}$$
where Z v , Z v ¯ and Z γ are independent standard normal distributed random variables. Now, we are able to derive an approximation for Z v , given v t + Δ t ,
$$\sqrt{\Delta t}\, Z^v \approx \frac{1}{\gamma_t \sqrt{ \frac{v_t + v_{t+\Delta t}}{2} }} \left( v_{t+\Delta t} - v_t - \kappa \left( \bar v_t - \frac{v_t + v_{t+\Delta t}}{2} \right) \Delta t \right).$$
This approximation can be substituted into Equation (A4), which ensures the correlation between v t , v ¯ t and γ t .
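The stepping of v̄_t and γ_t with the recovered Z^v can be sketched as follows. For simplicity we advance the variance with a plain Euler step instead of the QE scheme, so the function name, the clipping at small positive values and the default parameter values (taken from the hedge-test set-up of Section 4) are our own choices.

```python
import numpy as np

def step_correlated_params(v, vbar, gam, dt, rng, kappa=1.0,
                           kap_vb=1.4, vb_mean=0.1, a_vb=0.8, rho_vb=0.9,
                           kap_g=2.1, g_mean=0.7, a_g=1.0, rho_g=0.9):
    """One time-step of (v_t, vbar_t, gamma_t) with correlated drivers."""
    z_v, z_vb, z_g = rng.standard_normal(3)
    # Euler step for the variance (the paper uses the QE scheme here)
    v_new = max(v + kappa * (vbar - v) * dt
                + gam * np.sqrt(max(v, 0.0) * dt) * z_v, 1e-10)
    # Recover sqrt(dt) * Z^v implied by the realised variance increment
    v_mid = 0.5 * (v + v_new)
    sdt_zv = (v_new - v - kappa * (vbar - v_mid) * dt) / (gam * np.sqrt(v_mid))
    sdt = np.sqrt(dt)
    # Milstein-type steps for vbar and gamma, reusing sqrt(dt) * Z^v
    vbar_new = (vbar + kap_vb * (vb_mean - vbar) * dt
                + a_vb * vbar * (rho_vb * sdt_zv
                                 + np.sqrt(1.0 - rho_vb ** 2) * sdt * z_vb)
                + 0.5 * a_vb ** 2 * vbar * (z_vb ** 2 - 1.0) * dt)
    gam_new = (gam + kap_g * (g_mean - gam) * dt
               + a_g * gam * (rho_g * sdt_zv
                              + np.sqrt(1.0 - rho_g ** 2) * sdt * z_g)
               + 0.5 * a_g ** 2 * gam * (z_g ** 2 - 1.0) * dt)
    return v_new, max(vbar_new, 1e-10), max(gam_new, 1e-10)
```

Because the same realised Z^v enters all three updates, the simulated v̄_t and γ_t inherit the desired correlation with the variance path.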

Appendix C. VIX Heston: UK and Europe

Appendix C.1. Calibrated Parameters

Appendix C.1.1. Parameters obtained from UK data

$$\Omega_t^{\text{Heston}}(X) = \left\{ \kappa_t = 1.0, \quad v_{0,t} = 0.0014 + 0.0096 \cdot \text{VFTSE}_t^2, \quad \bar v_t = 0.0590 + 0.0110 \cdot \left( \text{VFTSE}_t^{\text{filter}} \right)^2, \quad \gamma_t = 0.2556 + 0.0206 \cdot \text{VFTSE}_t, \quad \rho_t = -0.6858 \right\}.$$

Appendix C.1.2. Parameters obtained from Europe data

$$\Omega_t^{\text{Heston}}(X) = \left\{ \kappa_t = 1.0, \quad v_{0,t} = 0.0013 + 0.0094 \cdot \text{VIX}_t^2, \quad \bar v_t = 0.0518 + 0.0100 \cdot \left( \text{VIX}_t^{\text{filter}} \right)^2, \quad \gamma_t = 0.0571 + 0.0252 \cdot \text{VIX}_t, \quad \rho_t = -0.6471 \right\}.$$

Appendix C.2. Predicted Parameters

Appendix C.2.1. Parameters predicted for UK

Figure A1. Prediction results Heston parameters of the UK dataset.

Appendix C.2.2. Parameters predicted for Europe

Figure A2. Prediction results Heston parameters of the Europe dataset.

References

  1. Alexander, Carol, Andreas Kaeck, and Leonardo M. Nogueira. 2009. Model risk adjusted hedge ratios. Journal of Futures Markets 29: 1021–49. [Google Scholar] [CrossRef]
  2. Andersen, Leif. 2008. Simple and efficient simulation of the Heston stochastic volatility model. Journal of Computational Finance 11: 1–42. [Google Scholar] [CrossRef]
  3. Audrino, Francesco, and Dominik Colangelo. 2010. Semi-parametric forecasts of the implied volatility surface using regression trees. Statistics and Computing 20: 421–34. [Google Scholar] [CrossRef]
  4. Bakshi, Gurdip, Charles Cao, and Zhiwu Chen. 1997. Empirical performance of alternative option pricing models. The Journal of Finance 52: 2003–49. [Google Scholar] [CrossRef]
  5. Bauer, Daniel, Daniela Bergmann, and Andreas Reuss. 2010. Solvency II and nested simulations—A least-squares Monte Carlo approach. Paper presented at the 2010 ICA Congress, Cape Town, South Africa, March 7–12. [Google Scholar]
  6. Bikker, Jacob A., and Haixia Hu. 2015. Cyclical patterns in profits, provisioning and lending of banks and procyclicality of the new Basel capital requirements. PSL Quarterly Review 55. Available online: https://ojs.uniroma1.it/index.php/PSLQuarterlyReview/article/view/9907 (accessed on 10 October 2017).
  7. CBOE. 2015. The CBOE Volatility Index—VIX: The Powerful and Flexible Trading and Risk Management Tool from the Chicago Board Options Exchange. Available online: https://www.cboe.com/micro/vix/vixwhite.pdf (accessed on 21 October 2018).
  8. Cont, Rama, Jose da Fonseca, and Valdo Durrleman. 2002. Stochastic models of implied volatility surfaces. Economic Notes 31: 361–77. [Google Scholar] [CrossRef]
  9. Devineau, Laurent, and Stéphane Loisel. 2009. Risk aggregation in Solvency II: How to converge the approaches of the internal models and those of the standard formula? Bulletin Français d’Actuariat 9: 107–45. [Google Scholar]
  10. Duan, Jin-Chuan, and Chung-Ying Yeh. 2012. Price and Volatility Dynamics Implied by the VIX Term Structure. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1788252 (accessed on 10 October 2017).
  11. Gauthier, Pierre, and Pierre-Yves H. Rivaille. 2009. Fitting the Smile, Smart Parameters for SABR and Heston. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1496982 (accessed on 21 October 2017).
  12. Hansen, Lars Peter. 1982. Large sample properties of generalized method of moments estimators. Econometrica 50: 1029–54. [Google Scholar] [CrossRef]
  13. Heston, Steven L. 1993. A closed-form solution for options with stochastic volatility with applications to bond and currency options. The Review of Financial Studies 6: 327–43. [Google Scholar] [CrossRef]
  14. Jain, Shashi, Patrik Karlsson, and Drona Kandhai. 2016. KVA, Mind Your P’s and Q’s! Available online: https://ssrn.com/abstract=2792956 (accessed on 22 October 2017).
  15. Kenyon, Chris, Andrew David Green, and Mourad Berrahoui. 2015. Which Measure for PFE? The Risk Appetite Measure A . Available online: https://ssrn.com/abstract=2703965 (accessed on 21 November 2017).
  16. Longstaff, Francis A., and Eduardo S. Schwartz. 2001. Valuing American options by simulation: A simple least-squares approach. The Review of Financial Studies 14: 113–47. [Google Scholar] [CrossRef]
  17. Milevsky, Moshe A., and Thomas S. Salisbury. 2001. The real option to lapse a variable annuity: Can surrender charges complete the market. Paper presented at the 11th Annual International AFIR Colloquium, Toronto, ON, Canada, September 6–7. [Google Scholar]
  18. Mixon, Scott. 2002. Factors explaining movements in the implied volatility surface. Journal of Futures Markets 22: 915–37. [Google Scholar] [CrossRef]
  19. Pykhtin, Michael. 2012. Model foundations of the Basel III standardised CVA charge. Risk 25: 60. [Google Scholar]
  20. Ruiz, Ignacio. 2014. Backtesting counterparty risk: How good is your model? Journal of Computational Finance 10: 87–120. [Google Scholar] [CrossRef]
  21. Singor, Stefan N., Alex Boer, Josephine S. C. Alberts, and Cornelis W. Oosterlee. 2017. On the modelling of nested risk-neutral stochastic processes with applications in insurance. Applied Mathematical Finance 24: 1–35. [Google Scholar] [CrossRef]
  22. Stein, Harvey J. 2016. Fixing risk neutral risk measures. International Journal of Theoretical and Applied Finance 19: 1650021. [Google Scholar] [CrossRef]
Figure 1. Evolution of the Heston parameters over time.
Figure 2. Mean error and mean squared error for different hedging strategies performed on monthly historical data.
Figure 3. Prediction results Heston parameters of the US dataset.
Figure 4. Probability density functions of the one-year loss distribution for a variable annuity with the GMAB rider, under the original and time-dependent risk-neutral measure. Scenario 1 = average initial volatility; Scenario 2 = low initial volatility; and Scenario 3 = high initial volatility.
Figure 5. SCR over time under the different risk-neutral measures.
Table 1. Hedge errors of the different hedging strategies. The standard errors of the estimates are given in parentheses.
| Frequency | E̅_Mean | E̅_Std |
|---|---|---|
| Classical Delta-Vega: | | |
| Once per week | −0.339 (0.0207) | 0.228 (0.0084) |
| Once per day | −0.354 (0.0227) | 0.210 (0.0080) |
| Adjusted Delta-Vega: | | |
| Once per week | −0.204 (0.0097) | 0.137 (0.0072) |
| Once per day | −0.220 (0.0071) | 0.114 (0.0047) |
| Full hedge: | | |
| Once per week | −0.044 (0.0040) | 0.068 (0.0042) |
| Once per day | −0.051 (0.0030) | 0.045 (0.0022) |
Table 2. Out-of-sample accuracy of the regression models according to the error measures defined in Equation (40).
| | SSE | MAE | R² | R²_min |
|---|---|---|---|---|
| US | | | | |
| VIX Heston | 0.2286 | 0.0124 | 0.8948 | 0.8159 |
| Unrestricted | 0.0185 | 0.0034 | 0.9915 | 0.9798 |
| UK | | | | |
| VIX Heston | 0.0577 | 0.0093 | 0.9340 | 0.7401 |
| Unrestricted | 0.0126 | 0.0045 | 0.9855 | 0.9690 |
| Europe | | | | |
| VIX Heston | 0.0433 | 0.0078 | 0.9389 | 0.7693 |
| Unrestricted | 0.0154 | 0.0051 | 0.9783 | 0.9190 |
Table 3. General contract details of the GMAB rider.
| Parameter | Value |
|---|---|
| F₀ | 1000 |
| G | 1000 |
| T_GMAB | 10 |
| r | 0.04 |
Table 4. Initial values and fair premiums of the GMAB rider.
| Parameter | Scenario 1 | Scenario 2 | Scenario 3 |
|---|---|---|---|
| v₀ | 0.04 | 0.01 | 0.27 |
| v̄₀ | 0.08 | 0.025 | 0.24 |
| γ₀ | 0.55 | 0.05 | 1.4 |
| α_GMAB | 0.0174 | 0.0057 | 0.0345 |
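To illustrate how scenario parameters of this kind feed into a Heston-type variance process, a minimal full-truncation Euler sketch of dv_t = κ(v̄ − v_t)dt + γ√(v_t)dW_t is given below, using the Scenario 1 values (v₀ = 0.04, v̄₀ = 0.08, γ₀ = 0.55) from Table 4. The mean-reversion speed κ is not reported in Table 4 and is an assumed illustrative value here.

```python
import numpy as np

def simulate_variance(v0, vbar, gamma, kappa=1.5, T=1.0, steps=252, seed=42):
    """Full-truncation Euler discretization of the Heston variance process
    dv_t = kappa*(vbar - v_t) dt + gamma*sqrt(v_t) dW_t.
    kappa is an assumed value, not taken from Table 4."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    v = np.empty(steps + 1)
    v[0] = v0
    for i in range(steps):
        v_pos = max(v[i], 0.0)  # full truncation keeps the square root real
        dw = rng.normal(scale=np.sqrt(dt))
        v[i + 1] = v[i] + kappa * (vbar - v_pos) * dt + gamma * np.sqrt(v_pos) * dw
    return v

# Scenario 1 parameters from Table 4.
path = simulate_variance(v0=0.04, vbar=0.08, gamma=0.55)
print(f"terminal variance: {path[-1]:.4f}")
```

The full-truncation scheme is one common choice for discretizing the variance process when the Feller condition may be violated, as is typical for parameters calibrated to equity index options.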
Table 5. Solvency Capital Requirement of the scenarios for the original and time-dependent risk-neutral measure.
| | Original | Time-Dependent |
|---|---|---|
| Scenario 1 | 170.1 | 195.5 |
| Scenario 2 | 166.9 | 237.5 |
| Scenario 3 | 178.8 | 150.8 |
Table 6. Difference in SCR between original and time-dependent risk-neutral measure.
| | Value | % |
|---|---|---|
| Mean absolute difference | 41.1 | 28.7% |
| Maximum absolute difference | 85.3 | 52.0% |
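Under Solvency II, the SCR corresponds to the 99.5% Value-at-Risk of the one-year loss distribution, so the figures in Tables 5 and 6 are quantiles of simulated losses such as those shown in Figure 4. A minimal sketch of this quantile estimation, using synthetic normally distributed losses as a stand-in for the paper's simulated loss distribution (the location and scale below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical simulated one-year losses of the annuity portfolio;
# synthetic stand-ins for the loss distributions of Figure 4.
losses = rng.normal(loc=0.0, scale=70.0, size=100_000)

# Solvency II: SCR = 99.5% Value-at-Risk of the one-year loss distribution.
scr = np.quantile(losses, 0.995)
print(f"SCR estimate: {scr:.1f}")
```

Because the SCR is a far-tail quantile, its Monte Carlo estimate is sensitive to the shape of the tail, which is precisely why the choice between the original and the time-dependent risk-neutral measure shifts the reported SCR values so strongly.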
Van Dijk, M.T.P.; De Graaf, C.S.L.; Oosterlee, C.W. Between ℙ and ℚ: The ℙ Measure for Pricing in Asset Liability Management. J. Risk Financial Manag. 2018, 11, 67. https://doi.org/10.3390/jrfm11040067