Jump-robust estimation of volatility with simultaneous presence of microstructure noise and multiple observations

Published in Finance and Stochastics.

Abstract

In this paper, we extend the multipower variation estimators of integrated volatility of (Barndorff-Nielsen and Shephard in J. Financ. Econom. 2:1–37, 2004) to allow for jumps in the underlying driving process and for the simultaneous presence of microstructure noise and multiple records of observations. By multiple records we mean more than one observation recorded on a single time stamp, as is often seen in stock markets, in particular for heavily traded securities, even in data sets of millisecond frequency. We establish the consistency and asymptotic normality of the estimators in both the noise-free and noise-present cases. Simulation studies confirm our theoretical results, and we apply the estimators to a real high-frequency data set.

Fig. 1
Fig. 2


Notes

  1. As introduced in the Appendix, \(A_{i}(p)\) is the set \(\{k\in{\mathbb {N}}: a_{i}(p)\leq k< b_{i}(p)\}\) with \(a_{i}(p)=2di(p+1)\) and \(b_{i}(p)=a_{i}(p)+2dp\).

  2. It is not hard to compute \(\beta_{j}\) by a Monte Carlo procedure; the Matlab code is available upon request.
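In the spirit of this footnote, here is a minimal Monte Carlo sketch in Python (rather than Matlab; the function name and default settings are ours) of the constant \(\beta_{j}=E[\prod_{k=1}^{d}|\alpha N_{2k-1}+\alpha' N_{2k}|^{r_{k}}\,|\alpha N_{2k-1+j}+\alpha' N_{2k+j}|^{r_{k}}]\) defined in the Appendix, for given weights \(\alpha,\alpha'\) and exponent vector \({\mathbf{r}}\):

```python
import numpy as np

def beta_mc(j, r, alpha, alpha_p, n_sim=200_000, seed=0):
    """Monte Carlo estimate of
    beta_j = E[ prod_k |alpha*N_{2k-1} + alpha'*N_{2k}|^{r_k}
                     * |alpha*N_{2k-1+j} + alpha'*N_{2k+j}|^{r_k} ]
    for i.i.d. standard normals N_1, N_2, ...
    Column i of the array N holds N_i (column 0 is unused)."""
    rng = np.random.default_rng(seed)
    d = len(r)
    N = rng.standard_normal((n_sim, 2 * d + j + 1))
    prod = np.ones(n_sim)
    for k in range(1, d + 1):
        prod *= np.abs(alpha * N[:, 2 * k - 1] + alpha_p * N[:, 2 * k]) ** r[k - 1]
        prod *= np.abs(alpha * N[:, 2 * k - 1 + j] + alpha_p * N[:, 2 * k + j]) ** r[k - 1]
    return float(prod.mean())
```

As a sanity check: with \(d=1\), \(r_{1}=2\) and \(\alpha=\alpha'=1/\sqrt{2}\), each factor is the square of a standard normal, so \(\beta_{j}\approx1\) once \(j\) is large enough for the two factors to be independent, and \(\beta_{0}=E[N^{4}]=3\).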

  3. Here, we make some necessary adjustments to the original pre-averaging estimator so that it is based on multiple observations; \(k_{M}\) is an integer, taken as \(k_{M}=\lfloor\theta \sqrt{\frac{1}{\Delta}}\rfloor\) for some positive constant \(\theta\); \(g\) is a positive real function vanishing outside \((0,1)\), continuous and piecewise \(C^{1}\), with piecewise Lipschitz derivative \(g'\) and \(\lim_{x\rightarrow0}\frac{g(x)}{\sqrt{x}}=0\).
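A common weight function satisfying all of these conditions is \(g(x)=\min(x,1-x)\); this specific choice and the helper names below are ours, not taken from the paper. A small sketch of the resulting window length and pre-averaging weights:

```python
import math

def g(x):
    """A standard pre-averaging weight function: g(x) = min(x, 1-x) on (0,1),
    zero outside.  It is continuous, piecewise C^1 with piecewise Lipschitz
    derivative, and g(x)/sqrt(x) = sqrt(x) -> 0 as x -> 0."""
    return min(x, 1.0 - x) if 0.0 < x < 1.0 else 0.0

def window_length(delta, theta=1.0):
    """k_M = floor(theta * sqrt(1/Delta)) as in the footnote."""
    return math.floor(theta * math.sqrt(1.0 / delta))

def preaveraging_weights(k_m):
    """The weights g(i/k_M), i = 1, ..., k_M - 1, applied to consecutive
    returns inside one pre-averaging window."""
    return [g(i / k_m) for i in range(1, k_m)]
```

For example, with \(\Delta=10^{-4}\) and \(\theta=1\) one gets \(k_{M}=100\).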

  4. Here, we add 2 to the \(P_{i}\) to ensure at least two observations at all time points, which is necessary to implement the range-based estimator.

  5. Hence, the \(L_{i}\) are a sequence of i.i.d. positive integer-valued random variables.

  6. The original RBE is available only when \(L_{i}\equiv L\); here, we adjust the weights so that the estimator applies to the general case.

  7. This can be obtained by simulation; the \(U_{i}\) are a sequence of independent random variables, generated from a discrete uniform distribution in the set \(\{1,2,\dots, L_{i}\}\). The simulation code is available from the author upon request.

References

1. Aït-Sahalia, Y., Jacod, J.: Is Brownian motion necessary to model high frequency data? Ann. Stat. 38, 3093–3128 (2010)

2. Aït-Sahalia, Y., Jacod, J.: Analyzing the spectrum of asset returns: jump and volatility components in high frequency data. J. Econ. Lit. 50, 1007–1050 (2012)

3. Aït-Sahalia, Y., Xiu, D.: Increased correlation among asset classes: are volatility or jumps to blame, or both? J. Econom. 194, 205–219 (2016)

4. Aldous, D., Eagleson, G.: On mixing and stability of limit theorems. Ann. Probab. 6, 325–331 (1978)

5. Andersen, T.G., Bollerslev, T., Diebold, F., Labys, P.: The distribution of realized exchange rate volatility. J. Am. Stat. Assoc. 96, 42–55 (2001)

6. Andersen, T.G., Bollerslev, T., Diebold, F., Labys, P.: Modeling and forecasting realized volatility. Econometrica 71, 579–625 (2003)

7. Andersen, T.G., Dobrev, D., Schaumburg, E.: Continuous-time models, realized volatilities, and testable distributional implications for daily stock returns. J. Appl. Econom. 25, 233–261 (2010)

8. Bachelier, L.: Théorie de la Spéculation. Gauthier-Villars, Paris (1900)

9. Barndorff-Nielsen, O., Graversen, S., Jacod, J., Podolskij, M., Shephard, N.: A central limit theorem for realised power and bipower variations of continuous semimartingales. In: Kabanov, Y., Liptser, R. (eds.) From Stochastic Analysis to Mathematical Finance, Festschrift for Albert Shiryaev, pp. 33–68. Springer, Berlin (2006)

10. Barndorff-Nielsen, O.E., Hansen, P.R., Lunde, A., Shephard, N.: Designing realised kernels to measure the ex-post variation of equity prices in the presence of noise. Econometrica 76, 1481–1536 (2008)

11. Barndorff-Nielsen, O.E., Hansen, P.R., Lunde, A., Shephard, N.: Multivariate realised kernels: consistent positive semi-definite estimators of the covariation of equity prices with noise and non-synchronous trading. J. Econom. 162, 149–169 (2011)

12. Barndorff-Nielsen, O.E., Shephard, N.: Power and bipower variation with stochastic volatility and jumps. J. Financ. Econom. 2, 1–37 (2004)

13. Black, F., Scholes, M.: The pricing of options and corporate liabilities. J. Polit. Econ. 81, 637–654 (1973)

14. Christensen, K., Kinnebrock, S., Podolskij, M.: Pre-averaging estimators of the ex-post covariance matrix in noisy diffusion models with non-synchronous data. J. Econom. 159, 116–133 (2010)

15. Christensen, K., Podolskij, M.: Asymptotic theory of range-based multipower variation. J. Financ. Econom. 10, 417–456 (2012)

16. Cont, R., Tankov, P.: Financial Modelling with Jump Processes. Chapman & Hall/CRC Press, London (2004)

17. Delbaen, F., Schachermayer, W.: A general version of the fundamental theorem of asset pricing. Math. Ann. 300, 463–520 (1994)

18. Dimson, E.: Risk measurement when shares are subject to infrequent trading. J. Financ. Econ. 7, 197–226 (1979)

19. Engle, R.F.: Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation. Econometrica 50, 987–1007 (1982)

20. Hayashi, T., Jacod, J., Yoshida, N.: Irregular sampling and central limit theorems for power variations: the continuous case. Ann. Inst. Henri Poincaré Probab. Stat. 47, 1197–1218 (2011)

21. Hayashi, T., Yoshida, N.: Asymptotic normality of a covariance estimator for nonsynchronously observed diffusion processes. Ann. Inst. Stat. Math. 60, 367–406 (2008)

22. Heston, S.L.: A closed-form solution for options with stochastic volatility with applications to bond and currency options. Rev. Financ. Stud. 6, 327–343 (1993)

23. Hudson, W.N., Mason, J.D.: Variational sums for additive processes. Proc. Am. Math. Soc. 55, 395–399 (1976)

24. Jacod, J., Li, Y., Mykland, P.A., Podolskij, M., Vetter, M.: Microstructure noise in the continuous case: the pre-averaging approach. Stoch. Process. Appl. 119, 2249–2276 (2009)

25. Jacod, J., Podolskij, M., Vetter, M.: Limit theorems for moving averages of discretized processes plus noise. Ann. Stat. 38, 1478–1545 (2010)

26. Jacod, J., Protter, P.: Discretization of Processes. Springer, New York (2012)

27. Jacod, J., Shiryaev, A.N.: Limit Theorems for Stochastic Processes, 2nd edn. Springer, New York (2003)

28. Jing, B., Kong, X., Liu, Z., Mykland, P.: On the jump activity index for semimartingales. J. Econom. 166, 213–223 (2012)

29. Jing, B., Liu, Z., Kong, X.: Estimating the volatility functionals with multiple transactions. Econom. Theory 33, 331–365 (2017)

30. Li, Y., Mykland, P., Renault, E., Zhang, L., Zheng, X.: Realized volatility when sampling times are possibly endogenous. Econom. Theory 30, 580–605 (2014)

31. Liu, Z.: Estimating integrated co-volatility with partially miss-ordered high frequency data. Stat. Inference Stoch. Process. 19, 175–197 (2015)

32. Merton, R.C.: Theory of rational option pricing. Bell J. Econ. Manag. Sci. 4, 141–183 (1973)

33. Podolskij, M., Vetter, M.: Bipower-type estimation in a noisy diffusion setting. Stoch. Process. Appl. 119, 2803–2831 (2009)

34. Rényi, A.: On stable sequences of events. Sankhya, Ser. A 25, 293–302 (1963)

35. Revuz, D., Yor, M.: Continuous Martingales and Brownian Motion, 3rd edn. Springer, New York (2005)

36. Todorov, V., Tauchen, G.: Activity signature functions for high frequency data analysis. J. Econom. 154, 125–138 (2010)

37. Woerner, J.H.C.: Power and multipower variation: inference for high frequency data. In: Shiryaev, A.N., et al. (eds.) Stochastic Finance, pp. 343–363. Springer, Boston (2006)

38. Xiu, D.: Quasi-maximum likelihood estimation of volatility with high frequency data. J. Econom. 159, 235–250 (2010)

39. Zhang, L.: Efficient estimation of stochastic volatility using noisy observations: a multi-scale approach. Bernoulli 12, 1019–1043 (2006)

40. Zhang, L., Mykland, P., Aït-Sahalia, Y.: A tale of two time scales: determining integrated volatility with noisy high-frequency data. J. Am. Stat. Assoc. 100, 1394–1411 (2005)


Author information

Correspondence to Zhi Liu.

Additional information

The author would like to thank the Editor, the Associate Editor, and two anonymous referees for their very extensive and constructive suggestions that helped to improve the paper considerably. Special thanks go to the Editor Professor Martin Schweizer for his kind help on polishing the manuscript. The work is partially supported by FDCT of Macau (No. 078/2013/A3) and NSFC (No. 11401607).

Appendix: Technical proofs

We need some notation to simplify the presentation of our proofs. For the process \(X\), we set

$$\begin{aligned} \alpha_{i} =&\sqrt{\frac{1}{L_{i}}\sum_{k=1}^{L_{i}} \bigg(\frac {k-1}{L_{i}}\bigg)^{2}},\qquad \alpha'_{i}=\sqrt{\frac{1}{L_{i}}\sum _{k=1}^{L_{i}}\bigg(1-\frac{k-1}{L_{i}}\bigg)^{2}} \qquad (\alpha _{0}=\alpha_{0}'=0),\\ \xi_{i} =&\sum_{j=1}^{L_{i}}\bigg(\frac{j-1}{L_{i}}\bigg)\Delta_{i,j}X\qquad (\xi_{0}=0),\\ \xi'_{i} =&\sum_{j=1}^{L_{i}}\bigg(1-\frac{j-1}{L_{i}}\bigg)\Delta _{i,j}X\qquad (\xi'_{0}=0),\\ \kappa_{i,\ell} =&\sigma_{s_{\ell}}\sum_{j=1}^{L_{i}}\bigg(\frac {j-1}{L_{i}}\bigg)\Delta_{i,j}W\qquad (\kappa_{0,0}=0),\\ \kappa'_{i,\ell} =&\sigma_{s_{\ell}}\sum_{j=1}^{L_{i}}\bigg(1-\frac {j-1}{L_{i}}\bigg)\Delta_{i,j}W \qquad(\kappa'_{0,0}=0, \kappa '_{0,1}=\kappa_{1,0}),\\ \mu_{i} =&\xi_{i}+\xi'_{i+1},\\ \theta_{i,\ell} =&\kappa_{i,\ell}+\kappa'_{i+1,\ell}. \end{aligned}$$
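The scaling constants \(\alpha_{i},\alpha'_{i}\) and the weighted increments \(\xi_{i},\xi'_{i}\) are straightforward to compute from the within-stamp increments \(\Delta_{i,j}X\); a small Python sketch (the helper names are ours):

```python
import math

def alphas(L):
    """The scaling constants for a time stamp carrying L observations:
    alpha  = sqrt((1/L) * sum_{k=1}^{L} ((k-1)/L)^2),
    alpha' = sqrt((1/L) * sum_{k=1}^{L} (1-(k-1)/L)^2)."""
    a = math.sqrt(sum(((k - 1) / L) ** 2 for k in range(1, L + 1)) / L)
    a_p = math.sqrt(sum((1.0 - (k - 1) / L) ** 2 for k in range(1, L + 1)) / L)
    return a, a_p

def weighted_increments(increments):
    """xi_i  = sum_j ((j-1)/L_i)   * Delta_{i,j}X,
    xi_i'   = sum_j (1-(j-1)/L_i) * Delta_{i,j}X,
    from the L_i within-stamp increments.  Since the two weights sum to one
    for each j, xi_i + xi_i' recovers the full increment over the stamp."""
    L = len(increments)
    xi = sum(((j - 1) / L) * dx for j, dx in enumerate(increments, start=1))
    xi_p = sum((1.0 - (j - 1) / L) * dx for j, dx in enumerate(increments, start=1))
    return xi, xi_p
```

The quantity \(\mu_{i}\) is then obtained by combining adjacent stamps, \(\mu_{i}=\xi_{i}+\xi'_{i+1}\).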

From the standard localization procedure, which essentially says that if a convergence holds for semimartingales with bounded coefficients, then it also holds for semimartingales with locally bounded coefficients (details can be found in Lemma 3.4.5 of [26]), there is no loss of generality in imposing the following assumption.

Assumption A.1

The coefficient processes of \(X\) are bounded.

To prove Theorem 3.1, we need to prove that

$$ \sum_{i=0}^{M-2d}\frac{\prod_{k=1}^{d}|\mu_{i+2k-1}|^{r_{k}}}{\gamma _{i,d}}\Delta\xrightarrow{\,\,P\,\,}\int_{0}^{t}\sigma_{s}^{2}ds \qquad\mbox{as} ~M\rightarrow\infty. $$
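This convergence can be illustrated with the simplest member of the family: with one observation per time stamp, \(d=2\) and \({\mathbf{r}}=(1,1)\), the estimator is, up to the scaling convention, the classical bipower variation of [12]. The sketch below (all parameter values are ours) shows its robustness to a single jump, in contrast to the realized variance:

```python
import numpy as np

def bipower_variation(x):
    """Classical bipower variation of [12]:
    (pi/2) * sum_i |Delta_i X| * |Delta_{i+1} X|,
    a jump-robust estimator of the integrated variance."""
    dx = np.abs(np.diff(x))
    return (np.pi / 2.0) * float(np.sum(dx[:-1] * dx[1:]))

# Brownian path with constant volatility plus one jump (hypothetical values).
rng = np.random.default_rng(1)
M, t, sigma = 50_000, 1.0, 0.3
dt = t / M
x = np.cumsum(sigma * np.sqrt(dt) * rng.standard_normal(M))
x[M // 2:] += 1.0                    # a single jump of size 1 at mid-sample
bv = bipower_variation(x)            # robust: stays close to sigma^2 * t = 0.09
rv = float(np.sum(np.diff(x) ** 2))  # realized variance: inflated by the jump
```

The jump contributes only \(O(\sqrt{\Delta})\) to the bipower variation, since it is multiplied by a neighbouring Brownian increment, while it enters the realized variance with its full squared size.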

We start from an auxiliary lemma.

Lemma A.2

From Assumptions 2.2, 2.3, and A.1, we have

$$\begin{aligned} \sum_{i=0}^{M-2d}\frac{|\prod_{k=1}^{d}|\mu_{i+2k-1}|^{r_{k}}-\prod _{k=1}^{d}|\theta_{i+2k-1,i}|^{r_{k}}|}{\gamma_{i,d}}\Delta\xrightarrow{\,\, P\,\,} 0. \end{aligned}$$

Proof

From the elementary inequality

$$ \bigg|\prod_{k=1}^{d}|A_{k}|-\prod_{k=1}^{d}|B_{k}|\bigg|\leq\sum_{k=1}^{d}\bigg(\prod_{j=1}^{k-1}|B_{j}| \, |A_{k}-B_{k}|\prod_{j=k+1}^{d}|A_{j}|\bigg), $$

Hölder’s inequality (generalized version), and the Burkholder–Davis–Gundy inequality, recalling that \(\Delta:=\frac {t}{M}\), we can show that

$$\begin{aligned} E&\bigg[\sum_{i=0}^{M-2d}\frac{|\prod_{k=1}^{d}|\mu ^{r_{k}}_{i+2k-1}|-\prod_{k=1}^{d}|\theta^{r_{k}}_{i+2k-1,i}||}{\gamma _{i,d}}\Delta\bigg]\\ \leq&E\bigg[\sum_{i=0}^{M-2d} \frac{\Delta}{\gamma_{i,d}} \\ &\phantom{E\bigg[\sum_{i=0}^{M-2d}}\times{\sum_{k=1}^{d}(||\mu ^{r_{k}}_{i+2k-1}|-|\theta^{r_{k}}_{i+2k-1,i}||\prod_{0< j< k}|\theta^{r_{j}}_{i+2j-1,i} |\prod_{k< j< d}|\mu^{r_{j}}_{i+2j-1}|)}\bigg]\\ \leq&\sum_{i=0}^{M-2d}\frac{1}{\gamma_{i,d}}\bigg(\sum_{k=1}^{d}\big(E\big[\big||\mu^{r_{k}}_{i+2k-1}|-|\theta^{r_{k}}_{i+2k-1,i}|\big|^{\frac {2}{r_{k}}}\big]\big)^{\frac{r_{k}}{2}}\\ &\phantom{\sum_{i=0}^{M-2d}\frac{1}{\gamma_{i,d}}\Bigg(\sum _{k=1}^{d}}\times\prod_{0< j< k}\big(E\big[|\theta^{r_{j}}_{i+2j-1,i}|^{\frac {2}{r_{j}}}\big]\big)^{\frac{r_{j}}{2}} \prod_{k< j< d}\big(E\big[|\mu ^{r_{j}}_{i+2j-1}|^{\frac{2}{r_{j}}}\big]\big)^{\frac{r_{j}}{2}}\bigg)\Delta\\ =&\sum_{i=0}^{M-2d}\frac{1}{\gamma_{i,d}}\bigg(\sum_{k=1}^{d}\big(E\big[\big||\mu^{r_{k}}_{i+2k-1}|-|\theta^{r_{k}}_{i+2k-1,i}|\big|^{\frac {2}{r_{k}}}\big]\big)^{\frac{r_{k}}{2}}\\ &\phantom{\sum_{i=0}^{M-2d}\frac{1}{\gamma_{i,d}}\bigg(\sum _{k=1}^{d}}\times\prod_{0< j< k}\big(E\big[|\theta^{2}_{i+2j-1,i}|\big]\big)^{\frac{r_{j}}{2}} \prod_{k< j< d}\big(E\big[|\mu^{2}_{i+2j-1}|\big]\big)^{\frac{r_{j}}{2}}\bigg)\Delta\\ \leq&K_{{\mathbf{r}},d}\sum_{i=0}^{M-2d}\frac{\sum_{k=1}^{d}(\Delta ^{r_{k}}+\Delta^{\frac{r_{k}-1}{2}}\Delta{\mathbf{1}}_{\{r_{k}>1\}})\Delta^{2-\frac {r_{k}}{2}}}{\gamma_{i,d}}\\ \leq&K_{{\mathbf{r}},d}\sum_{i=0}^{M-2d}\frac{\Delta^{2}\sum_{k=1}^{d}(\Delta ^{\frac{r_{k}}{2}}+\Delta^{\frac{1}{2}}{\mathbf{1}}_{\{r_{k}>1\}})}{\gamma _{i,d}}\longrightarrow0 \qquad\mbox{as}~ M\rightarrow\infty. \end{aligned}$$

We have used the elementary inequalities \(||x+y|^{p}-|x|^{p}|\leq K(|y|^{p}+|x|^{p-1}|y|)\) when \(p>1\) and \(||x+y|^{p}-|x|^{p}|\leq|y|^{p}\) when \(p\leq1\). □
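A quick numerical sanity check (with hypothetical random inputs) of the product inequality above, together with the elementary inequality \(||x+y|^{p}-|x|^{p}|\leq|y|^{p}\) for \(p\leq1\):

```python
import math
import random

def lhs(A, B):
    """Left-hand side: | prod_k |A_k| - prod_k |B_k| |."""
    return abs(math.prod(abs(a) for a in A) - math.prod(abs(b) for b in B))

def rhs(A, B):
    """Right-hand side: sum_k prod_{j<k}|B_j| * |A_k - B_k| * prod_{j>k}|A_j|."""
    d = len(A)
    return sum(math.prod(abs(B[j]) for j in range(k))
               * abs(A[k] - B[k])
               * math.prod(abs(A[j]) for j in range(k + 1, d))
               for k in range(d))

random.seed(0)
product_ok = True
power_ok = True
for _ in range(1000):
    A = [random.uniform(-2.0, 2.0) for _ in range(3)]
    B = [random.uniform(-2.0, 2.0) for _ in range(3)]
    product_ok = product_ok and lhs(A, B) <= rhs(A, B) + 1e-12
    # ||x+y|^p - |x|^p| <= |y|^p for p <= 1 (subadditivity of t -> t^p)
    x, y, p = A[0], B[0], random.uniform(0.1, 1.0)
    power_ok = power_ok and abs(abs(x + y) ** p - abs(x) ** p) <= abs(y) ** p + 1e-12
```

The product inequality follows from telescoping \(\prod_{k}|A_{k}|-\prod_{k}|B_{k}|\) one factor at a time and bounding \(||A_{k}|-|B_{k}||\leq|A_{k}-B_{k}|\).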

Proof of Theorem 3.1

In view of Lemma A.2, it suffices to show that

$$\begin{aligned} \sum_{i=0}^{M-2d}\frac{\prod_{k=1}^{d}|\theta_{i+2k-1,i}|^{r_{k}}}{\gamma _{i,d}}\Delta\xrightarrow{\,\,P\,\,} \int_{0}^{t}\sigma_{s}^{2}ds. \end{aligned}$$

Further, let

$$ \tilde{\theta}_{i}=\frac{\prod_{k=1}^{d}|\theta_{i+2k-1,i}|^{r_{k}}}{\gamma _{i,d}}\quad \text{and} \quad\tilde{\theta}'_{i}=E[\tilde{\theta }_{i}|{\mathcal{F}}_{s_{i}}]. $$

We have

$$\begin{aligned} E[(\tilde{\theta}_{i})^{2}|{\mathcal{F}}_{s_{i}}]\leq K \end{aligned}$$

and in particular \(\tilde{\theta}'_{i}\leq K\) by Assumption A.1. Since \(E[(\tilde{\theta}_{i}-\tilde{\theta}'_{i})(\tilde{\theta}_{j}-\tilde {\theta}'_{j})]=0\) when \(|i-j|\geq2d\), Assumption 2.3 and the Burkholder–Davis–Gundy inequality yield

$$\begin{aligned} E\bigg[\bigg|\sum_{i=0}^{M-2d}(\tilde{\theta}_{i}\Delta-\tilde{\theta }'_{i}\Delta)\bigg|^{2}\bigg] =&E\bigg[\sum_{i=0}^{M-2d}\sum _{j=0}^{M-2d}(\tilde{\theta}_{i}-\tilde{\theta}'_{i})(\tilde{\theta }_{j}-\tilde{\theta}'_{j}) \Delta^{2}\bigg]\\ \leq& K_{d}\sum_{i=0}^{M-2d}E[(\tilde{\theta}_{i}-\tilde{\theta }'_{i})^{2}]\Delta^{2}\leq K\Delta\longrightarrow0. \end{aligned}$$

Thus, to prove the theorem, we only need to show that

$$\begin{aligned} \sum_{i=0}^{M-2d}\tilde{\theta}'_{i}\Delta\xrightarrow{\,\,P\,\,} \int _{0}^{t}\sigma_{s}^{2}ds. \end{aligned}$$

Note that

$$\begin{aligned} \bigg|\sum_{i=0}^{M-2d}\tilde{\theta}'_{i}\Delta-\int_{0}^{t}\sigma_{s}^{2}ds\bigg| \leq&\bigg|\sum_{i=0}^{M-1}\sigma_{s_{i}}^{2}\Delta-\int_{0}^{t}\sigma _{s}^{2}ds\bigg|+\bigg|\sum_{i=M-2d+1}^{M-1}\sigma^{2}_{s_{i}}\Delta\bigg| \\ \leq&K\int_{0}^{t}|\theta_{s}^{(i)}-\sigma_{s}^{2}|ds+K_{d}\Delta, \end{aligned}$$

where \(\theta_{s}^{(i)}=\tilde{\theta}'_{i}=\sigma^{2}_{s_{\max\{i:s_{i}\leq s\} }}\). The required result follows because \((\sigma_{s})_{s \geq0}\) is right-continuous. □

Proof of Theorem 3.4

To derive the central limit theorem, we apply the “big blocks–small blocks” technique used in [24] and [14]. The big blocks serve to construct independent terms in the summation, and the increments in these big blocks eventually dominate the asymptotic behavior; the small blocks, which are asymptotically negligible, are removed from the summation. We now give the details. Given a positive integer \(p\), we define

$$\begin{aligned} a_{i}(p) =&2di(p+1),\quad b_{i}(p)=a_{i}(p)+2dp,\\ A_{i}(p) =&\{k\in{\mathbb {N}}: a_{i}(p)\leq k< b_{i}(p)\},\\ B_{i}(p) =&\{k\in{\mathbb {N}}: b_{i}(p)\leq k< a_{i+1}(p)\},\\ i_{M}(p) =&\max\{i: b_{i}(p)\leq M-2d\}=\bigg\lfloor \frac {M-2d}{2d(p+1)}\bigg\rfloor -1,\\ j_{M}(p) =&b_{i_{M}(p)}(p). \end{aligned}$$
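The index bookkeeping above can be sketched directly (the helper name is ours); each big block \(A_{i}(p)\) carries \(2dp\) indices, each small block \(B_{i}(p)\) carries \(2d\), and together the blocks \(A_{0}(p),B_{0}(p),\dots,A_{i_{M}(p)}(p),B_{i_{M}(p)}(p)\) tile the initial segment \(\{0,\dots,a_{i_{M}(p)+1}(p)-1\}\):

```python
def blocks(M, d, p):
    """Big-blocks/small-blocks index bookkeeping:
    a_i(p) = 2di(p+1),  b_i(p) = a_i(p) + 2dp,
    A_i(p) = {a_i(p), ..., b_i(p)-1}      (big block, 2dp indices),
    B_i(p) = {b_i(p), ..., a_{i+1}(p)-1}  (small block, 2d indices),
    i_M(p) = floor((M-2d)/(2d(p+1))) - 1."""
    a = lambda i: 2 * d * i * (p + 1)
    b = lambda i: a(i) + 2 * d * p
    i_M = (M - 2 * d) // (2 * d * (p + 1)) - 1
    big = [range(a(i), b(i)) for i in range(i_M + 1)]
    small = [range(b(i), a(i + 1)) for i in range(i_M + 1)]
    return big, small, i_M
```

For instance, with \(M=100\), \(d=1\), \(p=2\), there are \(i_{M}(p)+1=16\) big blocks of length 4 and 16 small blocks of length 2, tiling \(\{0,\dots,95\}\).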

So the \(i\)th big block consists of \(\{\prod_{k=1}^{d}|\mu_{\ell+2k-1}|^{r_{k}}: \ell\in A_{i}(p)\}\), whereas the \(i\)th small block contains \(\{\prod_{k=1}^{d}|\mu_{\ell+2k-1}|^{r_{k}}: \ell\in B_{i}(p)\}\). Because \(\prod_{k=1}^{d}|\mu_{i+2k-1}|^{r_{k}}\) is a \(2d\)-dependent sequence (conditionally on \(\sigma\)), that is,

$$ E\bigg[\prod_{k=1}^{d}|\mu_{i+2k-1}|^{r_{k}}\prod_{k=1}^{d}|\mu _{j+2k-1}|^{r_{k}}\bigg]=E\bigg[\prod_{k=1}^{d}|\mu_{i+2k-1}|^{r_{k}}\bigg] E\bigg[\prod_{k=1}^{d}|\mu_{j+2k-1}|^{r_{k}}\bigg] $$

when \(|i-j|>2d\), after removing the small blocks, we get an independent sequence. We denote

$$ \mu_{i,m}:=E\bigg[\prod_{k=1}^{d}|\mu_{i+2k-1}|^{r_{k}}\bigg|{\mathcal {F}}_{s_{m}}\bigg],\qquad\tilde{\theta}_{i,m}:=E\bigg[\prod_{k=1}^{d}|\theta _{i+2k-1, m}|^{r_{k}}\bigg|{\mathcal {F}}_{s_{m}}\bigg], $$

and the centralized versions are

$$ \hat{\mu}_{i,m}:=\frac{1}{\gamma_{i,d}}\bigg(\prod_{k=1}^{d}|\mu _{i+2k-1}|^{r_{k}}-\mu_{i,m}\bigg), \qquad\hat{\theta}_{i,m}:=\frac {1}{\gamma_{i,d}}\bigg(\prod_{k=1}^{d}|\theta_{i+2k-1, m}|^{r_{k}}-\tilde {\theta}_{i,m}\bigg). $$

For the different kinds of blocks, we use the approximations

$$ \bar{\mu}_{k}= \left\{ \textstyle\begin{array}{ll} \hat{\mu}_{k,a_{i}(p)},\quad& k\in A_{i}(p),\\ \hat{\mu}_{k,b_{i}(p)},& k\in B_{i}(p),\\ \hat{\mu}_{k,j_{M}(p)},& k\geq j_{M}(p), \end{array}\displaystyle \right.~~~~~ \bar{\theta}_{k}= \left\{ \textstyle\begin{array}{ll} \hat{\theta}_{k,a_{i}(p)},\quad& k\in A_{i}(p),\\ \hat{\theta}_{k,b_{i}(p)},& k\in B_{i}(p),\\ \hat{\theta}_{k,j_{M}(p)},& k\geq j_{M}(p). \end{array}\displaystyle \right. $$
(A.1)

Gathering all the terms \(\bar{\mu}_{k}\) for \(k\in A_{i}(p)\) and for \(k\in B_{i}(p)\), respectively, we define

$$\begin{aligned} \varsigma_{i}(p,1)=\sum_{\ell=a_{i}(p)}^{b_{i}(p)-1}\bar{\mu}_{\ell}\Delta ,\qquad\varsigma_{i}(p,2):=\sum_{\ell=b_{i}(p)}^{a_{i+1}(p)-1}\bar{\mu }_{\ell}\Delta. \end{aligned}$$

Note that \(\varsigma_{i}(p,1)\) contains \(2dp\) summands (“big”), whereas \(\varsigma_{i}(p,2)\) contains \(2d\) summands (“small”); because we eventually let \(p\rightarrow\infty\), the small blocks are asymptotically negligible. To formalize this, we set

$$\begin{aligned} N(p)=\sum_{j=0}^{i_{M}(p)}\varsigma_{j}(p,1),\qquad\tilde{N}(p)=\sum _{j=0}^{i_{M}(p)}\varsigma_{j}(p,2),\qquad C(p)=\sum_{j=j_{M}(p)}^{M-2d}\bar {\mu}_{j}\Delta. \end{aligned}$$

We then obtain

$$ {\mathrm{MPV}}(X, {\mathbf{r}})-\int_{0}^{t}\sigma_{s}^{2}ds = N(p)+\tilde{N}(p)+C(p)+R_{1}(p)+R_{2}(p), $$
(A.2)

where

$$\begin{aligned} R_{1}(p)=\sum_{i=0}^{M-2d}\frac{\bar{\mu}_{i}-\bar{\theta}_{i}}{\gamma _{i,d}}\Delta, \end{aligned}$$

and \(R_{2}(p)\) is from the error of the Riemann approximation, that is,

$$\begin{aligned} R_{2}(p)=\sum_{i=0}^{M-2d}\bar{\theta}_{i}\Delta-\int_{0}^{t}\sigma_{s}^{2}ds. \end{aligned}$$

Note that \(E[|\bar{\mu}_{j}|]\leq K\); hence \(E[|C(p)|]\leq K(p+1)\Delta\). Similarly, we can show that \(E[|\tilde{N}(p)|]\leq\frac{M}{2d(p+1)} K\Delta \leq\frac{K_{d}}{p+1}\). Moreover,

$$\begin{aligned} R_{2}(p) =&\sum_{i=1}^{i_{M}(p)}\bigg(\sum_{j\in A_{i}(p)}\sigma _{a_{i}(p)}^{2}-\int_{s_{a_{i}(p)}}^{s_{b_{i}(p)}}\sigma_{s}^{2}ds\bigg)\\ &+\sum_{i=1}^{i_{M}(p)}\bigg(\sum_{j\in B_{i}(p)}\sigma_{b_{i}(p)}^{2}-\int _{s_{b_{i}(p)}}^{s_{a_{i+1}(p)}}\sigma_{s}^{2}ds\bigg)+\sum _{j=j_{M}(p)}^{M-2d}\bigg(\sigma_{s_{j}}^{2}-\int_{s_{j}}^{s_{j+1}}\sigma _{s}^{2}ds\bigg)\\ =:&I_{M}+\mathit{II}_{M}+\mathit{III}_{M}, \end{aligned}$$

and \(E[|\mathit{III}_{M}|]\leq(p+1)\sqrt{\Delta} \Delta\) and \(E[|\mathit{II}_{M}|]\leq\frac {K\sqrt{\Delta}}{p+1}\). The arguments for \(I_{M}\xrightarrow{P}0\) and \(R_{1}(p)\xrightarrow{P}0\) are similar to the proof of [9, Eqs. (7.2) and (7.1)], respectively. Therefore, we have

$$ \lim_{p\rightarrow\infty}\limsup_{M\rightarrow\infty}P\big[\sqrt{M}\big(|\tilde{N}(p)|+|C(p)|+|R_{1}(p)|+|R_{2}(p)|\big)>\delta\big]=0 $$
(A.3)

for any \(\delta>0\). We can show (see Lemma A.3) that \(\sqrt {M}N(p)\xrightarrow{{\mathcal{L}}-(s)}U(p)\) for any fixed \(p\) and \(U(p)\xrightarrow{P} \int_{0}^{t}\gamma_{s}dB_{s}\). By combining this with (A.2) and (A.3) we can obtain the required result of Theorem 3.4. □

Lemma A.3

Suppose that \(X\) is a one-dimensional Itô semimartingale with representation (2.1) for which Assumptions  2.2, 3.2, and  A.1 are satisfied. Suppose also that Assumption  2.3 on the observation scheme holds with \(\Delta_{i}\equiv\frac {t}{M}\), and let \(\bar{r}=2\). Moreover, let \(p\) be a fixed positive integer.

1)

    If \(L_{i}\equiv L\), then we have

    $$ \sqrt{M}N(p)\xrightarrow{{\mathcal{L}}-(s)}\int_{0}^{t}\gamma(p)_{s}dB_{s} \qquad\textit{as } M\rightarrow\infty, $$

    where \(B\) is a standard Brownian motion (defined on an extension of \(\varOmega\)) independent of ℱ, and \(\gamma(p)\) is given by

    $$\begin{aligned} (\gamma(p)_{s})^{2} =&\Bigg(\frac{p}{p+1}\bigg(\prod_{k=1}^{d}\frac {m_{2r_{k}}}{m_{r_{k}}^{2}}-1\bigg)\\ &{}+2\sum_{j=1}^{2d-1}\frac{2dp-j}{2d(p+1)}\bigg(\frac{\beta_{j}}{(\alpha ^{2}+(\alpha')^{2})^{2} \prod_{k=1}^{d}m_{r_{k}}^{2}}-1\bigg)\Bigg)\sigma_{s}^{4}, \end{aligned}$$

    where \(\beta_{j}\) is defined in Theorem  3.4.

2)

    In general, if the \(L_{i}\) satisfy Assumption  3.3, then we have

    $$ \sqrt{M}N(p)\xrightarrow{{\mathcal{L}}-(s)}\int_{0}^{t}\gamma(p)_{s}dB_{s} \qquad\textit {as} ~M\rightarrow\infty, $$

    where \(B\) is a standard Brownian motion (defined on an extension of \(\varOmega\)) independent of ℱ, and \(\gamma(p)\) is given by

    $$\begin{aligned} \gamma(p)_{s}^{2} =&\frac{1}{2d(p+1)}\bigg(2dp\prod_{k=1}^{d}\frac {m_{2r_{k}}}{m_{r_{k}}^{2}} +2\sum_{k=1}^{d-1}(2dp-2k)\prod _{j=1}^{d-k}\frac{m_{r_{j}+r_{k+j}}}{m_{r_{j}}m_{r_{k+j}}}\\ &\phantom{\frac{1}{2d(p+1)}\bigg(} +2d(p+2d-4dp-1)+Q'_{d,p}(s)\bigg)\sigma_{s}^{4}, \end{aligned}$$

    where \(Q_{d,p}'(s)\) is given in Assumption  3.3.

Proof

1) Since \(L_{i}\equiv L\) and \(\Delta_{i}\equiv\Delta\), we get \(\alpha_{i}\equiv\alpha\) and \(\alpha'_{i}\equiv\alpha'\). By a martingale central limit theorem argument as presented in [27, Thm. IX.7.28], we need to verify the conditions

$$\begin{aligned} M\sum_{i=0}^{i_{M}(p)}E[\varsigma^{2}_{i}(p,1)|{\mathcal {F}}_{s_{a_{i}(p)}}] \xrightarrow{\,\,P\,\,}&\int_{0}^{t}\big(\gamma(p)\big)^{2}ds, \end{aligned}$$
(A.4)
$$\begin{aligned} M^{2}\sum_{i=0}^{i_{M}(p)}E[\varsigma^{4}_{i}(p,1)|{\mathcal {F}}_{s_{a_{i}(p)}}] \xrightarrow{\,\,P\,\,}& 0, \end{aligned}$$
(A.5)
$$\begin{aligned} \sqrt{M}\sum_{i=0}^{i_{M}(p)}E[\varsigma_{i}(p,1)\Delta W(p)_{i}|{\mathcal {F}}_{s_{a_{i}(p)}}] \xrightarrow{\,\,P\,\,}& 0, \end{aligned}$$
(A.6)
$$\begin{aligned} \sqrt{M}\sum_{i=0}^{i_{M}(p)}E[\varsigma_{i}(p,1)\Delta N(p)_{i}|{\mathcal {F}}_{s_{a_{i}(p)}}] \xrightarrow{\,\,P\,\,}& 0, \end{aligned}$$
(A.7)

where \(\Delta V(p)_{i}=V_{s_{b_{i}(p)-1}}-V_{s_{a_{i}(p)-1}}\) for any process \(V\), and \(N\) in (A.7) is any bounded martingale orthogonal to \(W\). Direct calculations show (A.5). Since \(\varsigma_{i}(p,1)\) is even as a function of \(W\), we have

$$\begin{aligned} E[\varsigma_{i}(p,1)\Delta W(p)_{i}|{\mathcal {F}}_{s_{a_{i}(p)}}]=0, \end{aligned}$$

and we deduce (A.6). The proof of (A.7) is the same as that of [24, Lemma 5.7] or [9]. We are hence left to prove (A.4). Since \(\Delta_{i}\equiv\Delta\), \(\alpha_{i}\equiv\alpha\), and \(\alpha'_{i}\equiv\alpha'\), we have

$$\begin{aligned} \gamma_{i,d}\equiv\big(\alpha^{2}+(\alpha')^{2}\big)\prod _{k=1}^{d}m_{r_{k}}=:\gamma_{d}. \end{aligned}$$

Hence, from (A.1), when \(\ell\in A_{i}(p)\), we have

$$\begin{aligned} E[\bar{\theta}_{\ell}^{2}|{\mathcal {F}}_{s_{a_{i}(p)}}] =&\sigma ^{4}_{s_{a_{i}(p)}}\bigg(\frac{(\alpha^{2}+(\alpha')^{2})^{2} \prod _{k=1}^{d}m_{2r_{k}}}{\gamma_{d}^{2}}-1\bigg) =\sigma^{4}_{s_{a_{i}(p)}}\bigg(\prod_{k=1}^{d}\frac {m_{2r_{k}}}{m_{r_{k}}^{2}}-1\bigg), \\ E[\bar{\theta}_{\ell}\bar{\theta}_{r}|{\mathcal {F}}_{s_{a_{i}(p)}}] =& \sigma^{4}_{s_{a_{i}(p)}}\bigg(\frac{1}{\gamma_{d}^{2}}{\beta_{|\ell -r|}}-1\bigg) =\sigma^{4}_{s_{a_{i}(p)}}\bigg(\frac{\beta_{|\ell-r|}}{(\alpha^{2}+(\alpha ')^{2})^{2}\prod_{k=1}^{d}m_{r_{k}}^{2}}-1\bigg) \end{aligned}$$

when \(|\ell-r|<2d\), where

$$ \beta_{|\ell-r|}:=E\bigg[\prod_{k=1}^{d}|\alpha{N}_{2k-1}+\alpha '{N}_{2k}|^{r_{k}}|\alpha{N}_{2k-1+|\ell-r|}+\alpha'{N}_{2k+|\ell -r|}|^{r_{k}}\bigg] $$

with a sequence of standard normal random variables \((N_{i})_{i \in{\mathbb {N}}}\). The cross moment between \(\bar{\theta}_{\ell}\) and \(\bar{\theta}_{r}\) vanishes when \(|\ell-r|\geq2d\). Therefore, we obtain

$$\begin{aligned} &E\bigg[\bigg(\sum_{\ell=a_{i}(p)}^{b_{i}(p)-1}\bar{\theta}_{\ell}\bigg)^{2}\Delta^{2}\bigg|{\mathcal {F}}_{s_{a_{i}(p)}}\bigg]\\ &=E\bigg[\sum_{\ell,r=a_{i}(p)}^{b_{i}(p)-1}\bar{\theta}_{\ell}\bar{\theta }_{r}\Delta^{2}\bigg|{\mathcal {F}}_{s_{a_{i}(p)}}\bigg]\\ &=\Delta^{2}\sum_{\ell=a_{i}(p)}^{b_{i}(p)-1}E[\bar{\theta}_{\ell }^{2}|{\mathcal {F}}_{s_{a_{i}(p)}}]+\Delta^{2}\sum_{\ell\neq r, \ell ,r=a_{i}(p)}^{b_{i}(p)-1}E[\bar{\theta}_{\ell}\bar{\theta}_{r}|{\mathcal {F}}_{s_{a_{i}(p)}}]\\ &= 2dp\bigg(\prod_{k=1}^{d}\frac{m_{2r_{k}}}{m_{r_{k}}^{2}}-1\bigg)\sigma ^{4}_{s_{a_{i}(p)}}\Delta^{2}\\ &\phantom{=:}+2\sum_{j=1}^{2d-1}(2dp-j)\bigg(\frac{\beta_{j}}{(\alpha^{2}+(\alpha')^{2})^{2}\prod_{k=1}^{d}m_{r_{k}}^{2}}-1\bigg)\sigma ^{4}_{s_{a_{i}(p)}}\Delta^{2}\\ &=\Bigg(2dp\bigg(\prod_{k=1}^{d}\frac{m_{2r_{k}}}{m_{r_{k}}^{2}}-1\bigg)+2\sum _{j=1}^{2d-1}(2dp-j)\\ &\phantom{=:\bigg\{ 2dp\bigg(\prod_{k=1}^{d}\frac {m_{2r_{k}}}{m_{r_{k}}^{2}}-1\bigg)+2\sum_{j=1}^{2d-1}}\times\bigg(\frac{\beta _{j}}{(\alpha^{2}+(\alpha')^{2})^{2}\prod_{k=1}^{d}m_{r_{k}}^{2}}-1\bigg)\Bigg)\sigma^{4}_{s_{a_{i}(p)}}\Delta^{2}. \end{aligned}$$

Observing that

$$\begin{aligned} E[\varsigma^{2}_{i}(p,1)|{\mathcal {F}}_{s_{a_{i}(p)}}] =&E\bigg[\bigg(\sum _{\ell=a_{i}(p)}^{b_{i}(p)-1}\bar{\mu}_{\ell}\bigg)^{2}\Delta^{2}\bigg|{\mathcal {F}}_{s_{a_{i}(p)}}\bigg]\\ =&E\bigg[\bigg(\sum_{\ell=a_{i}(p)}^{b_{i}(p)-1}\bar{\theta}_{\ell}\bigg)^{2}\Delta^{2}\bigg|{\mathcal {F}}_{s_{a_{i}(p)}}\bigg]+O_{p}(\Delta^{5/2}), \end{aligned}$$

we have

$$\begin{aligned} M\sum_{i=0}^{i_{M}(p)}E[\varsigma^{2}_{i}(p,1)|{\mathcal {F}}_{s_{a_{i}(p)}}]\xrightarrow{\,\,P\,\,}\int_{0}^{t}\big(\gamma(p)_{s}\big)^{2}ds, \end{aligned}$$

where

$$\begin{aligned} \big(\gamma(p)_{s}\big)^{2} :=&\frac{1}{2d(p+1)}\Bigg(2dp\bigg(\prod _{k=1}^{d}\frac{m_{2r_{k}}}{m_{r_{k}}^{2}}-1\bigg)\\ &\phantom{\frac{1}{2d(p+1)}\Big(}+2\sum_{j=1}^{2d-1}(2dp-j)\bigg(\frac {\beta_{j}}{(\alpha^{2}+(\alpha')^{2})^{2}\prod_{k=1}^{d}m_{r_{k}}^{2}}-1\bigg)\Bigg)\sigma_{s}^{4}. \end{aligned}$$

This completes the proof of the first part of Lemma A.3.

2) For the general case, it suffices to rederive the asymptotic variance by repeating the above procedure. If \(\ell\in A_{i}(p)\), then we let

$$ I_{\ell,r}=\left\{ \textstyle\begin{array}{ll} 1 &\quad \hbox{if $\ell-r$ is odd,} \\ 0 & \quad\hbox{if $\ell-r$ is even,} \end{array}\displaystyle \right. $$

when \(1\leq\ell-r\leq2d-1\). Now, we have

$$\begin{aligned} E[\bar{\theta}_{\ell}^{2}|{\mathcal {F}}_{s_{a_{i}(p)}}] =&\sigma ^{4}_{s_{a_{i}(p)}}\bigg(\frac{\prod_{k=1}^{d}(\alpha_{\ell+2k-1}^{2}+(\alpha '_{\ell+2k})^{2})^{r_{k}}m_{2r_{k}}}{\prod_{k=1}^{d}(\alpha_{\ell+2k-1}^{2} +(\alpha'_{\ell+2k})^{2})^{r_{k}}m^{2}_{r_{k}}}-1\bigg)\\ =&\sigma^{4}_{s_{a_{i}(p)}}\bigg(\prod_{k=1}^{d}\frac {m_{2r_{k}}}{m_{r_{k}}^{2}}-1\bigg), \\ E[\bar{\theta}_{\ell}\bar{\theta}_{r}|{\mathcal {F}}_{s_{a_{i}(p)}}] =& \left\{ \textstyle\begin{array}{ll} \sigma^{4}_{s_{a_{i}(p)}}\bigg(\prod_{k=1}^{d-\frac{\ell-r}{2}}\frac {m_{r_{k}+r_{k+\frac{\ell-r}{2}}}}{m_{r_{k}}m_{r_{k+\frac{\ell -r}{2}}}}-1\bigg) & \quad\hbox{for even $\ell-r$,} \\ \sigma^{4}_{s_{a_{i}(p)}}\bigg(\frac{E[\prod_{k=1}^{d-\frac{\ell -r-1}{2}}|\mathcal{A}_{k,\ell}|^{r_{k+\frac{\ell-r-1}{2}}}|\mathcal {B}_{k,\ell}|^{r_{k}}]}{\prod_{k=1}^{d-\frac{\ell-r-1}{2}}m_{r_{k}}m_{r_{k+\frac{\ell -r-1}{2}}}}-1\bigg) &\quad \hbox{for odd $\ell-r$,} \end{array}\displaystyle \right. \end{aligned}$$

where \(\mathcal{A}_{k,\ell}\sim{\mathcal {N}}(0,1)\), \(\mathcal{B}_{k,\ell }\sim{\mathcal {N}}(0,1)\) are i.i.d. for \(k=1,2,\dots\), and

$$\begin{aligned} {\mathrm{Cov}}({\mathcal {A}}_{k,\ell}, {\mathcal {B}}_{k,\ell})&=\alpha _{\ell+2k-1}\alpha'_{\ell+2k-1}, \\ {\mathrm{Cov}}({\mathcal {A}}_{k+1,\ell}, {\mathcal {B}}_{k,\ell})&=\alpha _{\ell+2k+1}\alpha'_{\ell+2k+1}, \\ {\mathrm{Cov}}({\mathcal {A}}_{j,\ell}, {\mathcal {B}}_{k,\ell})&= 0\quad \hbox{if}~ j>k+1. \end{aligned}$$

The cross moment between \(\bar{\theta}_{\ell}\) and \(\bar{\theta}_{r}\) vanishes when \(|\ell-r|\geq2d\). Thus,

$$\begin{aligned} E&\bigg[\bigg(\sum_{\ell=a_{i}(p)}^{b_{i}(p)-1}\bar{\theta}_{\ell}\bigg)^{2}\Delta^{2}\bigg|{\mathcal {F}}_{s_{a_{i}(p)}}\bigg]\\ =&E\bigg[\sum_{\ell,r=a_{i}(p)}^{b_{i}(p)-1}\bar{\theta}_{\ell}\bar{\theta }_{r}\Delta^{2}\bigg|{\mathcal {F}}_{s_{a_{i}(p)}}\bigg]\\ =&\sum_{\ell,r=a_{i}(p)}^{b_{i}(p)-1}E[\bar{\theta}_{\ell}\bar{\theta }_{r}\Delta^{2}|{\mathcal {F}}_{s_{a_{i}(p)}}]\\ =&\sum_{\ell=a_{i}(p)}^{b_{i}(p)-1}E[\bar{\theta}_{\ell}^{2}|{\mathcal {F}}_{s_{a_{i}(p)}}]\Delta^{2}+\sum_{\ell\neq r, \ell ,r=a_{i}(p)}^{b_{i}(p)-1}E[\bar{\theta}_{\ell}\bar{\theta}_{r}|{\mathcal {F}}_{s_{a_{i}(p)}}]\Delta^{2}\\ =&2dp\bigg(\prod_{k=1}^{d}\frac{m_{2r_{k}}}{m_{r_{k}}^{2}}-1\bigg)\sigma ^{4}_{s_{a_{i}(p)}}\Delta^{2}\\ &{}+2\sum_{r=a_{i}(p)}^{b_{i}(p)-1}\sum_{k=1}^{\min\{2d-1, b_{i}(p)-1-r\} }E[\bar{\theta}_{r}\bar{\theta}_{r+k}|{\mathcal {F}}_{s_{a_{i}(p)}}]\Delta ^{2}\\ =&\sigma^{4}_{s_{a_{i}(p)}}\Delta^{2}\Bigg(2dp\bigg(\prod_{k=1}^{d}\frac {m_{2r_{k}}}{m_{r_{k}}^{2}}-1\bigg)+2\sum_{k=1}^{d-1}(2dp-2k)\bigg(\prod _{j=1}^{d-k} \frac{m_{r_{j}+r_{k+j}}}{m_{r_{j}}m_{r_{k+j}}}-1\bigg)\\ &{}+2\sum_{r=a_{i}(p)}^{b_{i}(p)-1}\sum_{k=1}^{\lfloor\frac {b_{i}(p)-1-r}{2}\rfloor\wedge d}\bigg(\frac{E[\prod _{j=1}^{d-k+1}|{\mathcal {A}}_{k,r+2k-1}|^{r_{j+k-1}}|{\mathcal {B}}_{k,r+2k-1}|^{r_{j}}]}{\prod_{j=1}^{d-k+1}m_{r_{j}}m_{r_{j+k-1}}}-1\bigg)\Bigg)\\ =&\sigma^{4}_{s_{a_{i}(p)}}\Delta^{2}\Bigg(2dp\bigg(\prod_{k=1}^{d}\frac {m_{2r_{k}}}{m_{r_{k}}^{2}}-1\bigg)+2\sum_{k=1}^{d-1}(2dp-2k)\bigg(\prod _{j=1}^{d-k} \frac{m_{r_{j}+r_{k+j}}}{m_{r_{j}}m_{r_{k+j}}}-1\bigg)\\ &\phantom{\sigma^{4}_{s_{a_{i}(p)}}\Delta^{2}\Bigg(}{}+2\sum _{r=a_{i}(p)}^{b_{i}(p)-1}\sum_{k=1}^{\lfloor\frac{b_{i}(p)-1-r}{2}\rfloor \wedge d}\bigg(\frac{f_{d,k,r}}{g_{d,k}}-1\bigg)\Bigg), \end{aligned}$$

where the first term is the collection of \(E[(\bar{\theta }_{r})^{2}|{\mathcal {F}}_{s_{a_{i}(p)}}]\), the second term is the collection of \(E[\bar{\theta}_{\ell}\bar{\theta}_{r}|{\mathcal {F}}_{s_{a_{i}(p)}}]\) when \(|\ell-r|\) is even, and the third term is the collection of \(E[\bar {\theta}_{\ell}\bar{\theta}_{r}|{\mathcal {F}}_{s_{a_{i}(p)}}]\) when \(|\ell -r|\) is odd, \(\ell,r\in A_{i}(p)\). In view of Assumption 3.3, we obtain

$$ M\sum_{i=0}^{i_{M}(p)}E[\varsigma^{2}_{i}(p,1)|{\mathcal {F}}_{s_{a_{i}(p)}}]\xrightarrow{\,\,P\,\,}\int_{0}^{t}\gamma(p)_{s}^{2}ds, $$

where

$$\begin{aligned} \gamma(p)_{s}^{2} =&\frac{1}{2d(p+1)}\bigg(2dp\prod_{k=1}^{d}\frac {m_{2r_{k}}}{m_{r_{k}}^{2}} +2\sum_{k=1}^{d-1}(2dp-2k)\prod _{j=1}^{d-k}\frac{m_{r_{j}+r_{k+j}}}{m_{r_{j}}m_{r_{k+j}}}\\ & \phantom{\frac{1}{2d(p+1)}\Big(}{} +2d(p+2d-4dp-1)+Q'_{d,p}(s)\bigg)\sigma_{s}^{2}. \end{aligned}$$

We thus complete the proof of Lemma A.3. □

We now return to the proof of Theorem 3.4. In view of

$$\begin{aligned} \gamma(p)_{s}^{2}\longrightarrow\gamma_{s}^{2}\qquad \text{as $p\rightarrow \infty$} \end{aligned}$$

for both cases, we obtain the required proof of the theorem.  □

Proof of Theorem 3.5

It suffices for us to derive the form of the “asymptotic variance.” Denoting now

$$\begin{aligned} a_{i}(p)=4di(p+1),\qquad b_{i}(p)=a_{i}(p)+4dp,\qquad i_{M}(p)=\bigg\lfloor \frac{M-2d}{4d(p+1)}\bigg\rfloor -1 \end{aligned}$$

for \(i=1,\dots, \lfloor\frac{M-2d}{2}\rfloor\), and

$$ \tilde{\theta}_{i,m}:=E\bigg[\prod_{k=1}^{d}|\theta_{2i+2k-2, m}|^{r_{k}}\bigg|{\mathcal {F}}_{s_{m}}\bigg],\qquad\hat{\theta}_{i,m}:=\frac {1}{\gamma_{2i,d}}\bigg(\prod_{k=1}^{d}|\theta_{2i+2k-2, m}|^{r_{k}}-\tilde {\theta}_{i,m}\bigg), $$

we obtain

$$\begin{aligned} & 4 \Delta^{2} E\bigg[\bigg(\sum_{\ell=a_{i}(p)}^{b_{i}(p)-1}\bar{\theta }_{\ell}\bigg)^{2}\bigg|{\mathcal {F}}_{s_{a_{i}(p)}}\bigg] \\ &= 4\Delta^{2} \sum_{\ell=a_{i}(p)}^{b_{i}(p)-1}E[\bar{\theta}_{\ell }^{2}|{\mathcal {F}}_{s_{a_{i}(p)}}]+ 4\Delta^{2} \sum_{\ell\neq r, \ell ,r=a_{i}(p)}^{b_{i}(p)-1}E[\bar{\theta}_{\ell}\bar{\theta}_{r}|{\mathcal {F}}_{s_{a_{i}(p)}}] \\ &=4\Bigg(2dp\bigg(\prod_{k=1}^{d}\frac{m_{2r_{k}}}{m_{r_{k}}^{2}}-1\bigg)+2\sum _{k=1}^{d-1}(2dp-2k)\bigg(\prod_{j=1}^{d-k}\frac {m_{r_{j}+r_{k+j}}}{m_{r_{j}}m_{r_{k+j}}}-1\bigg)\Bigg) \sigma^{4}_{s_{a_{i}(p)}}\Delta^{2} \end{aligned}$$

since we do not include the terms of odd \(|\ell-r|\). Therefore,

$$\begin{aligned} M\sum_{i=0}^{i_{M}(p)}E[\varsigma^{2}_{i}(p,1)|{\mathcal {F}}_{s_{a_{i}(p)}}]\xrightarrow{\,\,P\,\,}\int_{0}^{t}\gamma(p)_{s}^{2}ds, \end{aligned}$$

where

$$\begin{aligned} \gamma(p)_{s}^{2} =&\frac{4}{4d(p+1)}\bigg(2dp\prod_{k=1}^{d}\frac {m_{2r_{k}}}{m_{r_{k}}^{2}} +2\sum_{k=1}^{d-1}(2dp-2k)\prod_{j=1}^{d-k} \frac{m_{r_{j}+r_{k+j}}}{m_{r_{j}}m_{r_{k+j}}} \\ &\phantom{\frac{4}{4d(p+1)}\Big(}+2d(p+2d-2dp-2)\bigg)\sigma_{s}^{2}. \end{aligned}$$

We observe that

$$ \lim_{p\rightarrow\infty}\gamma(p)_{s}^{2}=\bigg(2\prod_{k=1}^{d}\frac {m_{2r_{k}}}{m_{r_{k}}^{2}}+4\sum_{k=1}^{d-1}\prod_{j=1}^{d-k}\frac {m_{r_{j}+r_{k+j}}}{m_{r_{j}}m_{r_{k+j}}}+2-4d\bigg) \sigma_{s}^{2}. $$

This completes the proof of Theorem 3.5. □
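As an illustrative numerical check (not part of the proof), the limiting constant in the last display can be evaluated for the bipower case \(d=2\), \(r=(1,1)\), where it reduces to \(\pi^{2}/2+2\pi-6=2(\pi^{2}/4+\pi-3)\), i.e., twice the classical bipower-variation constant. The sketch below is ours (the helper names `abs_moment` and `limit_constant` are not from the paper) and uses the standard absolute-moment formula \(m_{r}=E|N(0,1)|^{r}=2^{r/2}\Gamma((r+1)/2)/\sqrt{\pi}\):

```python
import math

def abs_moment(r):
    # E|N(0,1)|^r = 2^{r/2} * Gamma((r+1)/2) / sqrt(pi)
    return 2 ** (r / 2) * math.gamma((r + 1) / 2) / math.sqrt(math.pi)

def limit_constant(rs):
    # Evaluates 2*prod m_{2r_k}/m_{r_k}^2 + 4*sum_k prod_j m_{r_j+r_{k+j}}/(m_{r_j} m_{r_{k+j}}) + 2 - 4d
    d = len(rs)
    needed = set(rs) | {2 * r for r in rs} | {rs[i] + rs[j] for i in range(d) for j in range(d)}
    m = {x: abs_moment(x) for x in needed}
    term1 = 2 * math.prod(m[2 * r] / m[r] ** 2 for r in rs)
    term2 = 4 * sum(
        math.prod(m[rs[j] + rs[j + k]] / (m[rs[j]] * m[rs[j + k]]) for j in range(d - k))
        for k in range(1, d)
    )
    return term1 + term2 + 2 - 4 * d

c = limit_constant([1, 1])  # bipower case: d = 2, r = (1, 1)
```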

Proof of Theorem 4.1

In view of Theorem 3.1, letting

$$ Y(t)=\int_{0}^{t}b_{s}ds+\int_{0}^{t}\sigma_{s}dW_{s} \quad \text{and}\quad Z(t)=X(t)-Y(t), $$

we need to prove that

$$\begin{aligned} \sum_{i=0}^{M-2d}\frac{(\prod_{k=1}^{d}|\bar{X}_{s_{i+2k}}-\bar {X}_{s_{i+2k-1}}|^{r_{k}}-\prod_{k=1}^{d}|\bar{Y}_{s_{i+2k}}-\bar {Y}_{s_{i+2k-1}}|^{r_{k}})}{\gamma_{i,d}}\Delta \xrightarrow{\,\,P\,\,} 0. \end{aligned}$$

For convenience, we denote

$$ a_{i,k}:=\frac{\bar{Y}_{s_{i+2k}}-\bar{Y}_{s_{i+2k-1}}}{(\alpha _{i+2k-1}^{2}\Delta+(\alpha'_{i+2k})^{2}\Delta)^{1/2}},\qquad b_{i,k}:=\frac {\bar{Z}_{s_{i+2k}}-\bar{Z}_{s_{i+2k-1}}}{(\alpha_{i+2k-1}^{2}\Delta+(\alpha'_{i+2k})^{2}\Delta)^{1/2}}. $$

Then, it suffices to show

$$\begin{aligned} \sum_{i=0}^{M-2d}\bigg(\prod_{k=1}^{d}|a_{i,k}+b_{i,k}|^{r_{k}}-\prod _{k=1}^{d}|a_{i,k}|^{r_{k}}\bigg)\Delta\xrightarrow{\,\,P\,\,} 0. \end{aligned}$$

Note that

$$\begin{aligned} \bigg|\prod_{k=1}^{d}|a_{i,k}+b_{i,k}|^{r_{k}}-\prod _{k=1}^{d}|a_{i,k}|^{r_{k}}\bigg| \leq&\sum_{k=1}^{d}|b_{i,k}|^{r_{k}}\prod_{j\neq k}|a_{i,j}|^{r_{j}}\\ &{}+\sum_{k=1}^{d}\prod_{j< k}|a_{i,j}|^{r_{j}}\prod_{j\geq k}|b_{i,j}|^{r_{j}}. \end{aligned}$$

Hence, it suffices to show that

$$\begin{aligned} \sum_{i=0}^{M-2d}(|b_{i,k}|^{r_{k}}\prod_{j\neq k}|a_{i,j}|^{r_{j}})\Delta \xrightarrow{\,\,P\,\,}& 0, \qquad k=1,2,\dots,d,\\ \sum_{i=0}^{M-2d}(\prod_{j< k}|a_{i,j}|^{r_{j}}\prod_{j\geq k}|b_{i,j}|^{r_{j}})\Delta \xrightarrow{\,\,P\,\,}& 0, \qquad k=1,2,\dots,d-1. \end{aligned}$$

The proofs of the first \(d\) convergences are similar; here, we show the case \(k=1\). We use the same technique as in [37] and split the jump process \(Z\) into two parts by a threshold \(\epsilon>0\): the first part contains the “big” jumps, whose absolute values exceed \(\epsilon\), and the second part is the rest. That is,

$$\begin{aligned} Z_{1t}^{\epsilon}=\sum_{0< s\leq t, |J(Z_{s})|>\epsilon} J(Z_{s}), \end{aligned}$$

where \(J(Z_{s})=Z_{s}-Z_{s-}\), and thus \(Z_{2t}^{\epsilon}=Z_{t}-Z_{1t}^{\epsilon}\). We define the indicator \(I_{i}(\epsilon)\) of the set \(\{|J(Z_{s})|\leq \epsilon\}\) for all \(s\in(s_{i}, s_{i+2}]\). We then use the generalized Hölder inequality with \(\frac{1}{p_{1}}+\frac {1}{p_{2}}+\cdots+\frac{1}{p_{d}}=1\) and \(\frac{1}{q_{1}}+\frac{1}{q_{2}}+\cdots +\frac{1}{q_{d}}=1\) to obtain

$$\begin{aligned} &\sum_{i=0}^{M-2d}\bigg(|b_{i,1}|^{r_{1}}\prod_{j=2}^{d}|a_{i,j}|^{r_{j}}\bigg)\Delta \\ &=\sum_{i=0}^{M-2d}\bigg(|b_{i,1}|^{r_{1}}\prod _{j=2}^{d}|a_{i,j}|^{r_{j}}\bigg)I_{i}(\epsilon)\Delta+\sum_{i=0}^{M-2d}\bigg(|b_{i,1}|^{r_{1}}\prod_{j=2}^{d}|a_{i,j}|^{r_{j}}\bigg)\big(1-I_{i}(\epsilon )\big)\Delta \\ &\leq\bigg(\sum_{i=0}^{M-2d}|b_{i,1}|^{r_{1}p_{1}}I_{i}(\epsilon)\Delta^{\frac {r_{1}p_{1}}{2}}\bigg)^{\frac{1}{p_{1}}}\prod_{j=2}^{d}\bigg(\sum _{i=0}^{M-2d}|a_{i,j}|^{r_{j}p_{j}} \Delta_{i}^{\frac{r_{j}p_{j}}{2}}\bigg)^{\frac{1}{p_{j}}} \\ &\phantom{=:}+\bigg(\sum_{i=0}^{M-2d}|b_{i,1}|^{r_{1}q_{1}}\big(1-I_{i}(\epsilon)\big)\Delta^{\frac{r_{1}q_{1}}{2}}\bigg)^{\frac {1}{q_{1}}}\prod_{j=2}^{d}\bigg(\sum_{i=0}^{M-2d}|a_{i,j}|^{r_{j}q_{j}} \Delta^{\frac{r_{j}q_{j}}{2}}\bigg)^{\frac{1}{q_{j}}}. \end{aligned}$$

If we take \(p_{j}=q_{j}=\frac{2}{r_{j}}\), then by using a method similar to that in Theorem 3.1 or the result of Theorem 1 in [29] we obtain

$$ \sum_{i=0}^{M-2d}|a_{i,j}|^{r_{j}q_{j}}\Delta^{\frac{r_{j}q_{j}}{2}}\xrightarrow {\,\,P\,\,}\int_{0}^{t}\sigma_{s}^{2}ds $$

for all \(j=1,2,\dots, d\). Now, we consider the term \(\sum _{i=0}^{M-2d}|b_{i,1}|^{r_{1}p_{1}}I_{i}(\epsilon)\Delta^{\frac{r_{1}p_{1}}{2}}\). For sufficiently large \(M\), we have \(\Delta\leq2\epsilon\), and hence \(\max_{j}(t_{j+1}-t_{j})<2\epsilon\). Further, if \(I_{j}(\epsilon)=1\), then \(Z_{t}=Z_{2t}^{\epsilon}\), and from [37] and [23] we have \(\sup_{j}|Z^{\epsilon}_{2t_{j}}-Z^{\epsilon}_{2t_{j-1}}|<2\epsilon\); thus, \(|\Delta^{\frac{1}{2}}b_{i,1}|^{r_{1}p_{1}}I_{i}(\epsilon)\leq K\epsilon\). Now, for a fixed \(\eta\) such that \(\eta>4\epsilon\), let \(c\in(\beta, p_{1}r_{1})\). Then we obtain

$$\begin{aligned} &\limsup_{M\rightarrow\infty}\sum _{i=0}^{M-2d}|b_{i,1}|^{r_{1}p_{1}}I_{i}(\epsilon)\Delta^{\frac {r_{1}p_{1}}{2}} \\ &\leq\limsup_{M\rightarrow\infty}(K\epsilon)^{r_{1}p_{1}-c}\sum _{i=0}^{M-2d}|b_{i,1}|^{c}I_{i}(\epsilon) \\ &\leq\limsup_{M\rightarrow\infty}(K\epsilon)^{r_{1}p_{1}-c}\sum _{i=1}^{n}|Z^{\epsilon}_{2t_{i}}-Z^{\epsilon}_{2t_{i-1}}|^{c} \\ &\leq(K\epsilon)^{r_{1}p_{1}-c}\limsup_{M\rightarrow\infty}\bigg(\sum _{0< s\leq t, |J(Z_{s})|\leq\eta}|J(Z_{s})|+\sum_{i=1}^{n}|Z^{\eta}_{2t_{i}}-Z^{\eta}_{2t_{i-1}}|^{c}\bigg), \end{aligned}$$

where \(K\) is a constant independent of \(\epsilon\) and \(M\), but depending on \(\max{L_{i}}\), \(\eta\), \(d\), and \(c\). Because \(Z\) is a pure jump process of finite variation, both sums are finite. Furthermore, since we choose \(c\in(\beta,r_{1}p_{1})\), we have \(0< r_{1}p_{1}-c< r_{1}p_{1}-\beta\); hence, \(\sum _{i=0}^{M-2d}|b_{i,1}|^{r_{1}p_{1}}I_{i}(\epsilon)\Delta^{\frac {r_{1}p_{1}}{2}}\xrightarrow{P}0\) by letting \(\epsilon\rightarrow0\). These choices require \(0< r_{1}p_{1}-\beta\), that is, \(\beta< r_{1}p_{1}=2\) since we let \(p_{j}=\frac{2}{r_{j}}\) above, which clearly is not a restriction.

Since we have finitely many big jumps whose absolute value is larger than \(\epsilon\), recalling that \(|\Delta^{\frac{1}{2}}b_{i,1}|^{r_{1}p_{1}}I_{i}(\epsilon)\leq K\epsilon\) for small enough \(\epsilon\), we obtain

$$ \lim_{\epsilon\rightarrow0}\lim_{M\rightarrow\infty}\sum _{i=0}^{M-2d}|b_{i,1}|^{r_{1}q_{1}}\big(1-I_{i}(\epsilon)\big)\Delta^{\frac {r_{1}q_{1}}{2}}=0. $$

Thus, we have proved the first \(d\) convergences.
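The \(\epsilon\)-splitting of the jump path used above can be illustrated in a toy example (not part of the proof; the jump sizes below are hypothetical): the big-jump part \(Z_{1}^{\epsilon}\) collects the finitely many jumps exceeding the threshold, and \(Z_{2}^{\epsilon}=Z-Z_{1}^{\epsilon}\) collects the rest.

```python
# Hypothetical jump sizes of a finite-activity pure-jump path Z on (0, t].
jumps = [0.8, -0.05, 0.02, -1.3, 0.004, 0.6, -0.01]
eps = 0.1

big   = [j for j in jumps if abs(j) > eps]   # jumps entering Z_1^eps
small = [j for j in jumps if abs(j) <= eps]  # jumps entering Z_2^eps = Z - Z_1^eps

z_t  = sum(jumps)   # Z_t
z1_t = sum(big)     # Z_{1t}^eps
z2_t = sum(small)   # Z_{2t}^eps
```

By construction the two parts reconstruct the path, and only finitely many "big" jumps survive as \(\epsilon\) is held fixed.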

We now prove the last \(d-1\) convergences. As the proofs are similar, here we only prove the case \(k=d-1\). The generalized Hölder inequality yields

$$\begin{aligned} &\sum_{i=0}^{M-2d}\bigg(|b_{i,d}|^{r_{d}}\Delta^{\frac {r_{d}}{2}}|b_{i,d-1}|^{r_{d-1}}\Delta^{\frac{r_{d-1}}{2}}\prod _{j=1}^{d-2}|a_{i,j}|^{r_{j}}\bigg)\Delta\\ &=\sum_{i=0}^{M-2d}\bigg(|b_{i,d}|^{r_{d}}\Delta^{\frac {r_{d}}{2}}|b_{i,d-1}|^{r_{d-1}}\Delta^{\frac{r_{d-1}}{2}}\prod _{j=1}^{d-2}|a_{i,j}|^{r_{j}}\Delta^{\frac{r_{j}}{2}}\bigg)I_{i+2d-4} (\epsilon)I_{i+2d-2}(\epsilon)\\ &\phantom{=:}+\sum_{i=0}^{M-2d}\bigg(|b_{i,d}|^{r_{d}}\Delta^{\frac {r_{d}}{2}}|b_{i,d-1}|^{r_{d-1}}\Delta^{\frac{r_{d-1}}{2}}\prod _{j=1}^{d-2}|a_{i,j}|^{r_{j}}\Delta^{\frac{r_{j}}{2}}\bigg)A_{i}(\epsilon)\\ &\phantom{=:}+\sum_{i=0}^{M-2d}\bigg(|b_{i,d}|^{r_{d}}\Delta^{\frac {r_{d}}{2}}|b_{i,d-1}|^{r_{d-1}}\Delta^{\frac{r_{d-1}}{2}}\prod _{j=1}^{d-2}|a_{i,j}|^{r_{j}}\Delta^{\frac{r_{j}}{2}}\bigg)A_{i}'(\epsilon)\\ &\phantom{=:}+\sum_{i=0}^{M-2d}\bigg(|b_{i,d}|^{r_{d}}\Delta^{\frac {r_{d}}{2}}|b_{i,d-1}|^{r_{d-1}}\Delta^{\frac{r_{d-1}}{2}}\prod _{j=1}^{d-2}|a_{i,j}|^{r_{j}}\Delta^{\frac{r_{j}}{2}}\bigg)A''_{i}(\epsilon), \end{aligned}$$

where

$$\begin{aligned} A_{i}(\epsilon)&=I_{i+2d-4}(\epsilon)\big(1-I_{i+2d-2}(\epsilon)\big),\\ A'_{i}(\epsilon)&= \big(1-I_{i+2d-4}(\epsilon)\big)I_{i+2d-2}(\epsilon ),\\ A''_{i}(\epsilon)&=\big(1-I_{i+2d-4}(\epsilon)\big)\big(1-I_{i+2d-2}(\epsilon)\big). \end{aligned}$$

For the first term, we obtain

$$\begin{aligned} &\sum_{i=0}^{M-2d}\bigg(|b_{i,d}|^{r_{d}}\Delta^{\frac {r_{d}}{2}}|b_{i,d-1}|^{r_{d-1}}\Delta^{\frac{r_{d-1}}{2}}\prod _{j=1}^{d-2}|a_{i,j}|^{r_{j}}\Delta^{\frac{r_{j}}{2}}\bigg)I_{i+2d-4}(\epsilon) I_{i+2d-2}(\epsilon)\\ &\leq\bigg(\sum_{i=0}^{M-2d}|b_{i,d}|^{a_{d}r_{d}}I_{i+2d-2}(\epsilon)\Delta ^{\frac{a_{d}r_{d}}{2}}\bigg)^{\frac{1}{a_{d}}}\\ &\phantom{=:}\times\bigg(\sum _{i=0}^{M-2d}|b_{i,d-1}|^{a_{d-1}r_{d-1}}I_{i+2d-4}(\epsilon) \Delta^{\frac{a_{d-1}r_{d-1}}{2}}\bigg)^{\frac{1}{a_{d-1}}}\\ &\phantom{=:}\times\prod_{j=1}^{d-2}\bigg(\sum _{i=0}^{M-2d}|a_{i,j}|^{r_{j}a_{j}}\Delta^{\frac{r_{j}a_{j}}{2}}\bigg)^{\frac{1}{a_{j}}}, \end{aligned}$$

which tends to zero as \(M\rightarrow\infty\) and \(\epsilon\rightarrow 0\). We can prove similarly that the second term and the third term also tend to zero as \(M\rightarrow\infty\). For the last term, by the same argument as in [12], for \(M\) large enough, there are no contiguous jumps because the term contains only finitely many jumps (the large jumps). Hence, for a small enough \(\epsilon\), the last term is equal to zero. This yields the desired result. □

Proof of Theorem 4.2

In view of the proof of Theorem 4.1, it suffices to show that

$$\begin{aligned} \sqrt{\Delta} \sum_{i=0}^{M-2d}\bigg(|b_{i,k}|^{r_{k}}\prod_{j\neq k}|a_{i,j}|^{r_{j}}\bigg)\Delta \xrightarrow{\,\,P\,\,}& 0, \qquad k=1,2,\dots,d,\\ \sqrt{\Delta} \sum_{i=0}^{M-2d}\bigg(\prod_{j< k}|a_{i,j}|^{r_{j}}\prod _{j\geq k}|b_{i,j}|^{r_{j}}\bigg)\Delta \xrightarrow{\,\,P\,\,}& 0, \qquad k=1,2,\dots,d-1. \end{aligned}$$

Here, we show the case \(k=1\); the other cases can be proved similarly. By the generalized Hölder inequality, we have

$$\begin{aligned} &\Delta^{\frac{1}{2}}\sum_{i=0}^{M-2d}\bigg(|b_{i,1}|^{r_{1}}\prod _{j=2}^{d}|a_{i,j}|^{r_{j}}\bigg) \\ &=\Delta^{\frac{1}{2}}\sum_{i=0}^{M-2d}\bigg(|b_{i,1}|^{r_{1}}\prod _{j=2}^{d}|a_{i,j}|^{r_{j}}\bigg)I_{i}(\epsilon)+\Delta^{\frac{1}{2}}\sum _{i=0}^{M-2d}\bigg(|b_{i,1}|^{r_{1}}\prod_{j=2}^{d}|a_{i,j}|^{r_{j}}\bigg)\big(1-I_{i}(\epsilon)\big) \\ &\leq\bigg(\sum_{i=0}^{M-2d}|b_{i,1}|^{r_{1}p_{1}}I_{i}(\epsilon)\Delta ^{1-\frac{p_{1}}{2}}\bigg)^{\frac{1}{p_{1}}}\prod_{j=2}^{d}\bigg(\sum _{i=0}^{M-2d}|a_{i,j}|^{r_{j}p_{j}}\Delta\bigg)^{\frac{1}{p_{j}}} \\ &\phantom{=:}+\bigg(\sum_{i=0}^{M-2d}|b_{i,1}|^{r_{1}q_{1}}\big(1-I_{i}(\epsilon)\big)\Delta^{1-\frac{q_{1}}{2}}\bigg)^{\frac{1}{q_{1}}}\prod _{j=2}^{d}\bigg(\sum_{i=0}^{M-2d}|a_{i,j}|^{r_{j}q_{j}}\Delta\bigg)^{\frac{1}{q_{j}}}. \end{aligned}$$

Now, to ensure that the last terms tend to zero, we require the inequalities

$$\begin{aligned} 1-\frac{q_{1}}{2}-\frac{q_{1}r_{1}}{2}>0, \qquad1-\frac{p_{1}}{2}-\frac {p_{1}r_{1}}{2}+r_{1}p_{1}-\beta>0, \end{aligned}$$

which can be satisfied for some \(p_{1}, q_{1}>1\) since \(r_{1}<1\) and \(\beta<1\). This, together with Theorem 3.5, yields the required result. □
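As a side check (ours, not part of the proof), the feasibility of the two exponent inequalities can be verified numerically for concrete values of \(r_{1}\), \(\beta\), \(p_{1}\), \(q_{1}\); the function name `feasible` and the sample values are illustrative, and the Hölder constraint on the remaining exponents is not checked here.

```python
def feasible(r1, beta, q1, p1):
    # The two inequalities required below (with q1, p1 > 1 as Hoelder exponents):
    c1 = 1 - q1 / 2 - q1 * r1 / 2 > 0                      # 1 - q1/2 - q1*r1/2 > 0
    c2 = 1 - p1 / 2 - p1 * r1 / 2 + r1 * p1 - beta > 0     # 1 - p1/2 - p1*r1/2 + r1*p1 - beta > 0
    return c1 and c2 and q1 > 1 and p1 > 1

ok  = feasible(0.5, 0.5, 1.2, 1.5)   # r1 < 1: a valid choice exists
bad = feasible(1.0, 0.5, 1.2, 1.5)   # r1 = 1: the first inequality fails
```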

Proof of Theorem 5.2

We first prove the theorem for the particular case \(L_{i}\equiv L\) and then extend it to the i.i.d. case. For any process \(Z\), let

$$\begin{aligned} \overline{\xi}_{i}(Z) =&\sum_{j=1}^{k_{M}-1}g\bigg(\frac{j}{k_{M}}\bigg)\xi _{i+j}(Z),\qquad\overline{\xi}'_{i}(Z)=\sum_{j=1}^{k_{M}-1}g\bigg(\frac {j}{k_{M}}\bigg)\xi'_{i+j}(Z),\\ \overline{\kappa}_{i,\ell}(Z) =&\sum_{j=1}^{k_{M}-1}g\bigg(\frac {j}{k_{M}}\bigg)\kappa_{i,\ell+j}(Z),\qquad\overline{\kappa}'_{i,\ell }(Z)=\sum_{j=1}^{k_{M}-1}g\bigg(\frac{j}{k_{M}}\bigg)\kappa'_{i,\ell+j}(Z). \end{aligned}$$

Further, let \(\overline{\overline{\xi}}_{i}(Z)=\sqrt{k_{M}}(\overline{\xi }_{i}(Z)+\overline{\xi}'_{i+1}(Z))\), \(\overline{\overline{\kappa }}_{i,\ell}(Z)=\sqrt{k_{M}}(\overline{\kappa}_{i,\ell}(Z)+\overline{\kappa }'_{i,\ell+1}(Z))\). We show the results for a continuous process. We can prove the robustness to the presence of jumps as in [33] or follow the procedure for the previous theorem, which is based on preaveraged increments. Let \(Y=X^{c}+\epsilon\), where \(X^{c}\) is the continuous part of \(X\). Since the drift term does not affect the asymptotic behavior, we assume that \(X\) does not contain a drift term. Then we have \(\Delta_{i,k_{M}}\overline{Y}=\frac{1}{\sqrt{k_{M}}}\overline {\overline{\xi}}_{i}(Y)\). Thus,

$$\begin{aligned} &\frac{1}{k_{M}}\sum_{i=0}^{M-dk_{M}}\bigg(\prod_{j=1}^{d}|\Delta _{i+(j-1)k_{M},k_{M}}\overline{Y}|^{r_{j}}\bigg)\\ &=\prod_{j=1}^{d}m_{r_{j}}\bigg(\int_{0}^{t}\bar{g}(2)\sigma_{s}^{2}ds +\frac{\overline{g'}(2)\omega^{2}}{L}t\bigg)+\sum_{j=1}^{3}I_{j,M}, \end{aligned}$$

where

$$\begin{aligned} I_{1,M} =&\frac{\Delta}{\theta^{2}}\sum_{i=0}^{M-dk_{M}}E\bigg[\prod _{j=1}^{d}|\overline{\overline{\kappa}}_{i,i+(j-1)k_{M}}(Y)|^{r_{j}}\bigg|{\mathcal {F}}_{s_{i}}\bigg]\\ &-\prod_{j=1}^{d}m_{r_{j}}\bigg(\int_{0}^{t}\bar{g}(2)\sigma_{s}^{2}ds+\frac {\overline{g'}(2)\omega^{2}}{L}t\bigg),\\ I_{2,M} =&\frac{\Delta}{\theta^{2}}\sum_{i=0}^{M-dk_{M}}\bigg(\prod _{j=1}^{d}|\overline{\overline{\kappa}}_{i,i+(j-1)k_{M}}(Y)|^{r_{j}}-E\bigg[\prod_{j=1}^{d}|\overline{\overline{\kappa}}_{i,i+(j-1)k_{M}}(Y)|^{r_{j}} \bigg|{\mathcal {F}}_{s_{i}}\bigg]\bigg),\\ I_{3,M} =&\frac{\Delta}{\theta^{2}}\sum_{i=0}^{M-dk_{M}}\bigg(\prod _{j=1}^{d}|\overline{\overline{\xi}}_{i+(j-1)k_{M}}(Y)|^{r_{j}}-\prod _{j=1}^{d}|\overline{\overline{\kappa}}_{i,i+(j-1)k_{M}}(Y)|^{r_{j}}\bigg). \end{aligned}$$

The convergences \(I_{3,M}\xrightarrow{P}0\) and \(I_{2,M}\xrightarrow {P}0\) follow from the same procedure used for the proofs of Lemma A.2 and Theorem 3.1. To show that \(I_{1,M}\xrightarrow{P}0\), we consider

$$ E\bigg[\prod_{j=1}^{d}|\overline{\overline{\kappa}}_{i,\ell }(Y)|^{r_{j}}\bigg|{\mathcal {F}}_{s_{i}}\bigg]=\prod_{j=1}^{d}E\bigg[|\sigma _{s_{i}}\overline{\overline{\kappa}}_{i,\ell}(W)+\overline{\overline {\kappa}}_{\ell}(\epsilon)|^{r_{j}}\bigg|{\mathcal {F}}_{s_{i}}\bigg], $$

and by some simple computations we have

$$\begin{aligned} &\sqrt{k_{M}}\overline{\overline{\kappa}}_{i,\ell}(W)\\ &=\sqrt{k_{M}}\big(\overline{\kappa}_{i,\ell}(W)+\overline{\kappa }'_{i,\ell+1}(W)\big)\\ &=\sqrt{k_{M}}\bigg(\sum_{j=1}^{k_{M}}g\Big(\frac{j}{k_{M}}\Big) \sum _{k=1}^{L}\frac{k-1}{L}\Delta_{\ell+j,k}W\\ &\phantom{=:}+\sum_{j=1}^{k_{M}-1}g\Big(\frac{j}{k_{M}}\Big) \sum_{k=1}^{L}\Big(1-\frac{k-1}{L}\Big)\Delta_{\ell+j+1,k}W\bigg)\\ &=\sqrt{k_{M}}g\Big(\frac{1}{k_{M}}\Big) \sum_{k=1}^{L}\frac{k-1}{L}\Delta _{\ell+1,k}W\\ &\phantom{=:}+\sqrt{k_{M}}\sum_{j=1}^{k_{M}-1}\sum_{k=1}^{L}\bigg(g\Big(\frac {j+1}{k_{M}}\Big) \frac{k-1}{L}+g\Big(\frac{j}{k_{M}}\Big) \Big(1-\frac {k-1}{L}\Big)\bigg)\Delta_{\ell+j+1,k}W\\ &=\sqrt{k_{M}}\sum_{j=1}^{k_{M}-1}\sum_{k=1}^{L}\bigg(g\Big(\frac {j+1}{k_{M}}\Big) \frac{k-1}{L}+g\Big(\frac{j}{k_{M}}\Big) \Big(1-\frac {k-1}{L}\Big)\bigg)\Delta_{\ell+j+1,k}W\\ &\phantom{=:}+o_{p}(1), \end{aligned}$$

where \(o_{p}(1)\) denotes a random variable that tends to 0 in probability as \(M\to \infty\). By denoting the last term (without \(o_{p}(1)\)) as \(\chi_{\ell}^{M}\), we have \(\chi_{\ell}^{M}\overset{d}{=}A^{M}{N}_{1}\), where \({N}_{1}\) is a standard normal random variable and

$$\begin{aligned} (A^{M})^{2}&:=\frac{1}{L}\sum_{j=1}^{k_{M}-1}\sum_{k=1}^{L}\bigg(g\Big(\frac{j+1}{k_{M}}\Big) \frac{k-1}{L}+g\Big(\frac{j}{k_{M}}\Big) \Big(1-\frac{k-1}{L}\Big)\bigg)^{2} (k_{M}\Delta)\\ &\phantom{:}=\frac{\theta^{2}}{Lk_{M}}\sum_{j=1}^{k_{M}-1}\sum_{k=1}^{L}\bigg(g\Big(\frac{j+1}{k_{M}}\Big) \frac{k-1}{L}+g\Big(\frac{j}{k_{M}}\Big) \Big(1-\frac{k-1}{L}\Big)\bigg)^{2}\\ &\longrightarrow\theta^{2}\bar{g}(2) \qquad\mbox{as } k_{M}\rightarrow \infty. \end{aligned}$$

For \(\overline{\overline{\xi}}_{i}(\epsilon)\), we have

$$\begin{aligned} &\sqrt{k_{M}}\overline{\overline{\xi}}_{\ell}(\epsilon)\\ &=\sqrt{k_{M}}\sum_{j=1}^{k_{M}-1}g\Big(\frac{j}{k_{M}}\Big)(\overline {\epsilon}_{\ell+j+1}-\overline{\epsilon}_{\ell+j})\\ &=\sqrt{k_{M}}\sum_{j=1}^{k_{M}-1}g\Big(\frac{j}{k_{M}}\Big) \frac{1}{L}\sum _{k=1}^{L}\epsilon_{N_{\ell+j}+k}-\sqrt{k_{M}}\sum_{j=1}^{k_{M}-1}g\Big(\frac {j}{k_{M}}\Big) \frac{1}{L}\sum_{k=1}^{L}\epsilon_{N_{\ell+j-1}+k}\\ &=-g\Big(\frac{1}{k_{M}}\Big) \frac{\sqrt{k_{M}}}{L}\sum_{k=1}^{L}\epsilon _{N_{\ell}+k}+\frac{\sqrt{k_{M}}}{L}\sum_{j=1}^{k_{M}-1}\sum_{k=1}^{L}\bigg(g\Big(\frac{j}{k_{M}}\Big)-g\Big(\frac{j+1}{k_{M}}\Big)\bigg) \epsilon_{N_{\ell+j}+k}\\ &=o_{p}(1)+\frac{\sqrt{k_{M}}}{L}\sum_{j=1}^{k_{M}-1}\sum_{k=1}^{L}\bigg(g\Big(\frac{j}{k_{M}}\Big)-g\Big(\frac{j+1}{k_{M}}\Big)\bigg)\epsilon_{N_{\ell+j}+k}. \end{aligned}$$

Similarly, by denoting the last term as \(\vartheta_{\ell}^{M}\), we have \(\vartheta_{\ell}^{M}\overset{d}{\rightarrow} B{N}_{2}\) by the Lindeberg–Feller central limit theorem, where \({N}_{2}\) is a standard normal random variable and

$$\begin{aligned} B^{2}:=\lim_{k_{M}\rightarrow\infty}\frac{k_{M}}{L^{2}}\sum_{j=1}^{k_{M}-1}\sum _{k=1}^{L}\bigg(g\Big(\frac{j}{k_{M}}\Big)-g\Big(\frac{j+1}{k_{M}}\Big)\bigg)^{2}\omega^{2}=\frac{\overline{g'}(2)}{L}\omega^{2}. \end{aligned}$$
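The two limits \((A^{M})^{2}\rightarrow\theta^{2}\bar{g}(2)\) and \(B^{2}=\overline{g'}(2)\omega^{2}/L\) can be checked numerically (this sketch is ours, not part of the proof). We take the common weight function \(g(x)=\min(x,1-x)\), for which \(\bar{g}(2)=\int_{0}^{1}g^{2}=1/12\) and \(\overline{g'}(2)=\int_{0}^{1}(g')^{2}=1\); the parameter values are illustrative.

```python
g = lambda x: min(x, 1 - x)            # standard preaveraging weight function
theta, L, omega2, kM = 1.0, 4, 0.01, 4000

# Discrete (A^M)^2 from the Brownian part; should approach theta^2 * int g^2 = theta^2 / 12.
A2 = (theta ** 2 / (L * kM)) * sum(
    (g((j + 1) / kM) * (k - 1) / L + g(j / kM) * (1 - (k - 1) / L)) ** 2
    for j in range(1, kM)
    for k in range(1, L + 1)
)

# Discrete B^2 from the noise part; should approach omega^2 * int (g')^2 / L = omega2 / L.
B2 = (kM / L ** 2) * sum(
    (g(j / kM) - g((j + 1) / kM)) ** 2
    for j in range(1, kM)
    for k in range(1, L + 1)
) * omega2
```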

Following [33] (Lemma 2 and the first paragraph of Theorem 1), we obtain

$$\begin{aligned} E\big[|\sigma_{s_{i}}\overline{\overline{\kappa}}_{i,\ell}(W)+\overline {\overline{\kappa}}_{\ell}(\epsilon)|^{r_{j}}\big|{\mathcal {F}}_{s_{i}}\big]= m_{r_{j}}\bigg(\theta^{2}\bar{g}(2)\sigma^{2}_{s_{i}}+\frac{\overline {g'}(2)}{L}\omega^{2}\bigg)^{\frac{r_{j}}{2}}+o_{p}(1) \end{aligned}$$

uniformly in \(i\). Thus, we obtain

$$\begin{aligned} &\frac{\Delta}{\theta^{2}}\sum_{i=0}^{M-dk_{M}}E\bigg[\prod _{j=1}^{d}|\overline{\overline{\kappa}}_{i,i+(j-1)k_{M}}(Y)|^{r_{j}}\bigg|{\mathcal {F}}_{s_{i}}\bigg]\\ &=\frac{\Delta}{\theta^{2}}\sum_{i=0}^{M-dk_{M}}\prod_{j=1}^{d}E\big[|\sigma _{s_{i}}\overline{\overline{\kappa}}_{i,i+(j-1)k_{M}}(W)+\overline{\overline {\kappa}}_{i+(j-1)k_{M}}(\epsilon)|^{r_{j}}\big|{\mathcal {F}}_{s_{i}}\big]\\ &=\frac{\Delta}{\theta^{2}}\sum_{i=0}^{M-dk_{M}}\prod_{j=1}^{d}m_{r_{j}}\bigg(\theta^{2}\bar{g}(2)\sigma^{2}_{s_{i}}+\frac{\overline{g'}(2)}{L}\omega ^{2}\bigg)^{\frac{r_{j}}{2}}+o_{p}(1)\\ &=\frac{\Delta}{\theta^{2}}\sum_{i=0}^{M-dk_{M}}\bigg(\prod _{j=1}^{d}m_{r_{j}}\bigg)\bigg(\theta^{2}\bar{g}(2)\sigma^{2}_{s_{i}}+\frac {\overline{g'}(2)}{L}\omega^{2}\bigg)+o_{p}(1)\\ &\longrightarrow\bigg(\prod_{j=1}^{d}m_{r_{j}}\bigg)\bigg(\bar{g}(2)\int _{0}^{t}\sigma^{2}_{s}ds+\frac{\overline{g'}(2)}{\theta^{2} L}\omega^{2}t\bigg). \end{aligned}$$

If \((L_{i})_{i \in{\mathbb {N}}}\) is an i.i.d. random sequence taking positive integer values with \(E[\frac{1}{L_{i}}]=\lambda\), then we similarly have \((A^{M})^{2}\rightarrow\theta^{2}\bar{g}(2)\) as \(k_{M}\rightarrow\infty\), and

$$\begin{aligned} \sqrt{k_{M}}\overline{\overline{\xi}}_{\ell}(\epsilon)&=o_{p}(1)+\sqrt {k_{M}}\sum_{j=1}^{k_{M}-1}\bigg(g\Big(\frac{j}{k_{M}}\Big)-g\Big(\frac {j+1}{k_{M}}\Big)\bigg) \bigg(\frac{1}{L_{\ell+j}}\sum_{k=1}^{L_{\ell+j}} \epsilon_{N_{\ell+j}+k}\bigg)\\ &\overset{d}{\longrightarrow} \big(\overline{g'}(2)\lambda\big)^{1/2}\omega{N}_{2}. \end{aligned}$$

The result follows. □

Proof of Proposition 5.3

Note that

$$\begin{aligned} \widetilde{\omega^{2}}&=\frac{1}{2M}\bigg(\sum_{i=1}^{M}(\bar{X}_{s_{i}}-\bar {X}_{s_{i-1}})^{2}+\sum_{i=1}^{M}(\bar{\epsilon}_{s_{i}}-\bar{\epsilon }_{s_{i-1}})^{2}\\ &\phantom{=:}+2\sum_{i=1}^{M}(\bar{X}_{s_{i}}-\bar{X}_{s_{i-1}})(\bar {\epsilon}_{s_{i}}-\bar{\epsilon}_{s_{i-1}})\bigg). \end{aligned}$$

First, we have

$$\begin{aligned} \frac{1}{2M}\sum_{i=1}^{M}(\bar{X}_{s_{i}}-\bar{X}_{s_{i-1}})^{2} \leq&\frac {3}{2M}\sum_{i=1}^{M}(\bar{X}^{c}_{s_{i}}-\bar{X}^{c}_{s_{i-1}})^{2}+\frac {3}{2M}\sum_{i=1}^{M}(\bar{X}^{J_{1}}_{s_{i}}-\bar{X}^{J_{1}}_{s_{i-1}})^{2}\\ &+\frac{3}{2M}\sum_{i=1}^{M}(\bar{X}^{J_{2}}_{s_{i}}-\bar{X}^{J_{2}}_{s_{i-1}})^{2}, \end{aligned}$$

where \(dX^{c}_{t}=b_{t}dt+\sigma_{t}dW_{t}\) is the continuous part, the jump martingale part is \(dX^{J_{1}}=\int_{R}h(x)(\mu-\nu)(dx,dt)\), and \(dX^{J_{2}}=\int_{R}(x-h(x))\mu(dx,dt)\) is the big jump part. The estimates \(E[(\bar{X}^{c}_{s_{i}}-\bar{X}^{c}_{s_{i-1}})^{2}]\leq K\Delta\) and \(E[(\bar{X}^{J_{1}}_{s_{i}}-\bar{X}^{J_{1}}_{s_{i-1}})^{2}]\leq K\Delta\) yield \(\frac{3}{2M}\sum_{i=1}^{M}(\bar{X}^{c}_{s_{i}}-\bar {X}^{c}_{s_{i-1}})^{2}\xrightarrow{\,\,P\,\,}0\) and \(\frac{3}{2M}\sum_{i=1}^{M}(\bar{X}^{J_{1}}_{s_{i}}-\bar {X}^{J_{1}}_{s_{i-1}})^{2}\xrightarrow{P}0\), respectively. Since \(h(x)=x\) near zero, the third summation contains only finitely many nonzero terms as \(M\rightarrow\infty\). Therefore, \(\frac{3}{2M}\sum _{i=1}^{M}(\bar{X}^{J_{2}}_{s_{i}}-\bar{X}^{J_{2}}_{s_{i-1}})^{2}\xrightarrow{P}0\).

Second, by the Cauchy–Schwarz inequality and Assumption 5.1, we obtain

$$\begin{aligned} &\frac{1}{2M}\sum_{i=1}^{M}(\bar{X}_{s_{i}}-\bar{X}_{s_{i-1}})(\bar {\epsilon}_{s_{i}}-\bar{\epsilon}_{s_{i-1}})\xrightarrow{\,\,P\,\,}0. \end{aligned}$$

Third, note that

$$\begin{aligned} \frac{1}{2M}\sum_{i=1}^{M}(\bar{\epsilon}_{s_{i}}-\bar{\epsilon }_{s_{i-1}})^{2}=\frac{1}{2M}\sum_{i=1}^{M}\bar{\epsilon}_{s_{i}}^{2} +\frac{1}{2M}\sum_{i=1}^{M}\bar{\epsilon}_{s_{i-1}}^{2}-\frac{1}{M}\sum _{i=1}^{M}\bar{\epsilon}_{s_{i-1}}\bar{\epsilon}_{s_{i}}. \end{aligned}$$

Since \(E[\bar{\epsilon}_{s_{i}}^{2}]=\lambda\omega^{2}\) and \(E[\bar{\epsilon }_{s_{i-1}}\bar{\epsilon}_{s_{i}}]=0\), we obtain

$$\begin{aligned} \frac{1}{2M}\sum_{i=1}^{M}(\bar{\epsilon}_{s_{i}}-\bar{\epsilon }_{s_{i-1}})^{2}\xrightarrow{\,\,P\,\,}\lambda\omega^{2} \end{aligned}$$

by the law of large numbers. □
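The conclusion of Proposition 5.3 can be checked in a toy simulation (ours, not part of the proof): with pure i.i.d. Gaussian noise averaged over i.i.d. record counts \(L_{i}\), the statistic \(\frac{1}{2M}\sum_{i}(\bar{\epsilon}_{s_{i}}-\bar{\epsilon}_{s_{i-1}})^{2}\) should be close to \(\lambda\omega^{2}\) with \(\lambda=E[1/L_{i}]\). All parameter values below are illustrative.

```python
import random
import statistics

random.seed(7)
M, omega2 = 200_000, 0.04
L_choices = [1, 2, 5]                              # i.i.d. numbers of records per stamp
lam = statistics.mean(1 / L for L in L_choices)    # lambda = E[1/L_i]

# Per-stamp averaged noise eps_bar_i = (1/L_i) sum_k eps_{N_i + k}
eps_bar = []
for _ in range(M + 1):
    L = random.choice(L_choices)
    eps_bar.append(sum(random.gauss(0, omega2 ** 0.5) for _ in range(L)) / L)

# Realized-variance-type noise estimator from squared differences
est = sum((eps_bar[i] - eps_bar[i - 1]) ** 2 for i in range(1, M + 1)) / (2 * M)
```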

Proof of Theorem 5.5

Let

$$\begin{aligned} \zeta_{i}:=\frac{1}{\bar{g}(2)}\bigg(\frac{\theta^{2}\prod_{j=1}^{d}|\Delta _{iK_{M}+(j-1)k_{M},k_{M}}\overline{X(\epsilon)}|^{r_{j}}}{\prod_{k=1}^{d} m_{r_{k}}}-\frac{\overline{g'}(2)\widetilde{\omega^{2}}t}{(\lfloor\frac {M}{k_{M}}\rfloor-d)}\bigg). \end{aligned}$$

By using the big-small-blocks technique, we have

$$\begin{aligned} &a_{i}(p)=i(p+d)k_{M},\qquad b_{i}(p)=a_{i}(p)+pk_{M},\\ &A_{i}(p)=\{k\in{\mathbb {N}}: a_{i}(p)\leq k< b_{i}(p)\},\\ &B_{i}(p)=\{k\in{\mathbb {N}}: b_{i}(p)\leq k< a_{i+1}(p)\},\\ &i_{M}(p)=\max\{i: b_{i}(p)\leq M-dk_{M}\}=\bigg\lfloor \frac {M-dk_{M}}{(p+d)k_{M}}\bigg\rfloor -1,\\ &j_{M}(p)=b_{i_{M}(p)}(p). \end{aligned}$$
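The big-small-blocks bookkeeping above can be sketched directly (an illustration, with hypothetical values of \(M\), \(k_{M}\), \(d\), \(p\); the function name `blocks` is ours): big blocks \(A_{i}(p)\) of length \(pk_{M}\) alternate with small blocks \(B_{i}(p)\) of length \(dk_{M}\), and all blocks are disjoint.

```python
def blocks(M, kM, d, p):
    # Index sets as defined above: a_i, b_i, A_i(p), B_i(p), i_M(p), j_M(p).
    a = lambda i: i * (p + d) * kM        # start of the i-th big block
    b = lambda i: a(i) + p * kM           # end of big block / start of small block
    iM = (M - d * kM) // ((p + d) * kM) - 1
    A = [set(range(a(i), b(i))) for i in range(iM + 1)]      # big blocks
    B = [set(range(b(i), a(i + 1))) for i in range(iM + 1)]  # small blocks
    return A, B, iM, b(iM)

A, B, iM, jM = blocks(M=1000, kM=10, d=2, p=3)
```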

Further, let

$$\begin{aligned} \hat{\zeta}_{i}&:=\frac{1}{\bar{g}(2)}\bigg(\frac{\theta^{2}\prod _{j=1}^{d}|\sigma_{s_{a_{i}(p)}}\Delta_{iK_{M}+(j-1)k_{M},k_{M}}\overline {W}+\Delta_{iK_{M}+(j-1)k_{M},k_{M}}\overline{\epsilon}|^{r_{j}}}{\prod_{k=1}^{d} m_{r_{k}}}\\ &\phantom{=:\frac{1}{\bar{g}(2)}\bigg(}-\frac{\overline{g'}(2)\omega ^{2}\lambda t}{(\lfloor\frac{M}{k_{M}}\rfloor-d)}\bigg) \end{aligned}$$

and \(\varsigma_{i}(p)=\sum_{l=a_{i}(p)}^{b_{i}(p)-1}(\zeta_{l}-E[\zeta _{l}|{\mathcal {F}}_{s_{a_{i}(p)}}])\). Similarly to [33], we can show that

$$\begin{aligned} \sqrt{k_{M}}\bigg(V_{M}-\sum_{i=0}^{i_{M}(p)}\varsigma_{i}(p)\bigg)\xrightarrow {\,\,P\,\,}0. \end{aligned}$$

We compute the asymptotic variance. In view of Assumption 5.4, we can obtain a similar result as that in Lemma 4 of [33]. Thus, we have

$$\begin{aligned} &E\big[(\zeta_{\ell}-E[\zeta_{\ell}|{\mathcal {F}}_{s_{a_{i}(p)}}])^{2}\big|{\mathcal {F}}_{s_{a_{i}(p)}}\big]\\ &=E\big[(\hat{\zeta}_{\ell}-E[\hat{\zeta}_{\ell}|{\mathcal {F}}_{s_{a_{i}(p)}}])^{2}\big|{\mathcal {F}}_{s_{a_{i}(p)}}\big]+o_{p}(\Delta^{\frac{3}{2}})\\ &=\frac{1}{k^{2}_{M}(\bar{g}(2))^{2}}E\bigg[\bigg(\frac{\theta^{2}\prod _{j=1}^{d}|\sigma_{s_{a_{i}(p)}}\Delta_{iK_{M}+(j-1)k_{M},k_{M}}\overline {W}+\Delta_{iK_{M}+(j-1)k_{M},k_{M}}\overline{\epsilon}|^{r_{j}}}{\prod_{k=1}^{d} m_{r_{k}}}\\ &\phantom{=:\frac{1}{k^{2}_{M}(\bar{g}(2))^{2}}E\bigg[}-\theta^{2}\big(\sigma ^{2}_{s_{a_{i}(p)}}\theta^{2}\bar{g}(2)+\overline{g'}(2)\lambda\omega^{2}\big)\bigg)^{2}\bigg|{\mathcal {F}}_{s_{a_{i}(p)}}\bigg]+o_{p}(\Delta^{\frac{3}{2}})\\ &=\frac{\theta^{4}}{k^{2}_{M}(\bar{g}(2))^{2}}\Bigg(\bigg(\frac{\prod _{j=1}^{d}m_{2r_{j}}}{\prod_{j=1}^{d} m_{r_{j}}}-1\bigg)\big(\sigma ^{2}_{s_{a_{i}(p)}}\theta^{2}\bar{g}(2)+\overline{g'}(2)\lambda\omega^{2}\big)^{2}\Bigg)+o_{p}(\Delta^{\frac{3}{2}}). \end{aligned}$$

For \(1\leq\ell-r< d\), we have

$$\begin{aligned} &E\big[(\zeta_{\ell}-E[\zeta_{\ell}|{\mathcal {F}}_{s_{a_{i}(p)}}])(\zeta _{r}-E[\zeta_{r}|{\mathcal {F}}_{s_{a_{i}(p)}}])\big|{\mathcal {F}}_{s_{a_{i}(p)}}\big]\\ &=\frac{\theta^{4}}{k^{2}_{M}(\bar{g}(2))^{2}} \Bigg(\bigg(\prod_{j=1}^{d-(\ell -r)}\frac{m_{r_{j}+r_{j+\ell-r}}}{m_{r_{j}}m_{r_{j+\ell-r}}}-1\bigg) \big(\sigma^{2}_{s_{a_{i}(p)}}\theta^{2}\bar{g}(2)+\overline{g'}(2)\lambda \omega^{2}\big)^{2}\Bigg)\\ &\phantom{=}+o_{p}(\Delta^{\frac{3}{2}}), \end{aligned}$$

and the (conditional) covariances are zero when \(\ell-r\geq d\). Thus,

$$\begin{aligned} &k_{M}\sum_{i=0}^{i_{M}(p)}E[\varsigma^{2}_{i}(p)|{\mathcal {F}}_{s_{a_{i}(p)}}]\\ &=\frac{\theta^{4}}{k_{M}(\bar{g}(2))^{2}}\sum_{i=0}^{i_{M}(p)}\Bigg(p\bigg(\prod_{k=1}^{d}\frac{m_{2r_{k}}}{m_{r_{k}}^{2}}-1\bigg)+2\sum _{k=1}^{d-1}(p-k)\bigg(\prod_{j=1}^{d-k} \frac{m_{r_{j}+r_{k+j}}}{m_{r_{j}}m_{r_{k+j}}}-1\bigg)\Bigg)\\ &\phantom{=:\frac{\theta^{4}}{k_{M}(\bar{g}(2))^{2}}\sum_{i=0}^{i_{M}(p)}}\times \big(\sigma^{2}_{s_{a_{i}(p)}}\theta^{2}\bar{g}(2)+\overline{g'}(2)\lambda \omega^{2}\big)^{2}+o_{p}(\Delta^{\frac{1}{2}})\\ &\longrightarrow\frac{1}{p+d}\Bigg(p\bigg(\prod_{k=1}^{d}\frac {m_{2r_{k}}}{m_{r_{k}}^{2}}-1\bigg)+2\sum_{k=1}^{d-1}(p-k)\bigg(\prod _{j=1}^{d-k}\frac{m_{r_{j}+r_{k+j}}}{m_{r_{j}}m_{r_{k+j}}}-1\bigg)\Bigg)\\ &\phantom{::\longrightarrow}\times\Bigg(\theta^{6}\int_{0}^{t}\sigma _{s}^{4}ds+2\theta^{4}\frac{\overline{g'}(2)\lambda\omega^{2}}{\bar{g}(2)}\int _{0}^{t}\sigma_{s}^{2}ds+\bigg(\frac{\theta\lambda\omega^{2}\overline{g'}(2)}{\bar {g}(2)}\bigg)^{2}t\Bigg). \end{aligned}$$

Denoting the limit as \(\gamma(p)\), we observe that

$$\begin{aligned} \lim_{p\rightarrow\infty}\gamma(p)&=\Bigg(\prod_{k=1}^{d}\frac {m_{2r_{k}}}{m_{r_{k}}^{2}}+2\sum_{k=1}^{d-1}\bigg(\prod _{j=1}^{d-k}\frac{m_{r_{j}+r_{k+j}}}{m_{r_{j}}m_{r_{k+j}}}\bigg)-2d+1\Bigg) \\ &\phantom{=:}\times\Bigg(\theta^{6}\int_{0}^{t}\sigma_{s}^{4}ds+2\theta^{4}\frac {\overline{g'}(2)\lambda\omega^{2}}{\bar{g}(2)}\int_{0}^{t}\sigma_{s}^{2}ds+\bigg(\frac{\theta\lambda\omega^{2}\overline{g'}(2)}{\bar{g}(2)}\bigg)^{2}t\Bigg). \end{aligned}$$

In view of Assumption 5.4, similarly to [33] (Lemma 8), the convergences of (A.5)–(A.7) can be shown. This completes the proof of Theorem 5.5. □

Proof of Proposition 5.7

The proof is similar to that of Theorem 5.2. □


Cite this article

Liu, Z. Jump-robust estimation of volatility with simultaneous presence of microstructure noise and multiple observations. Finance Stoch 21, 427–469 (2017). https://doi.org/10.1007/s00780-017-0325-7
