Appendix: Technical proofs
We need some notation to simplify the presentation of our proofs. For the process \(X\), we set
$$\begin{aligned} \alpha_{i} =&\sqrt{\frac{1}{L_{i}}\sum_{k=1}^{L_{i}} \bigg(\frac {k-1}{L_{i}}\bigg)^{2}},\qquad \alpha'_{i}=\sqrt{\frac{1}{L_{i}}\sum _{k=1}^{L_{i}}\bigg(1-\frac{k-1}{L_{i}}\bigg)^{2}} \qquad (\alpha _{0}=\alpha_{0}'=0),\\ \xi_{i} =&\sum_{j=1}^{L_{i}}\bigg(\frac{j-1}{L_{i}}\bigg)\Delta_{i,j}X\qquad (\xi_{0}=0),\\ \xi'_{i} =&\sum_{j=1}^{L_{i}}\bigg(1-\frac{j-1}{L_{i}}\bigg)\Delta _{i,j}X\qquad (\xi'_{0}=0),\\ \kappa_{i,\ell} =&\sigma_{s_{\ell}}\sum_{j=1}^{L_{i}}\bigg(\frac {j-1}{L_{i}}\bigg)\Delta_{i,j}W\qquad (\kappa_{0,0}=0),\\ \kappa'_{i,\ell} =&\sigma_{s_{\ell}}\sum_{j=1}^{L_{i}}\bigg(1-\frac {j-1}{L_{i}}\bigg)\Delta_{i,j}W \qquad(\kappa'_{0,0}=0, \kappa '_{0,1}=\kappa_{1,0}),\\ \mu_{i} =&\xi_{i}+\xi'_{i+1},\\ \theta_{i,\ell} =&\kappa_{i,\ell}+\kappa'_{i+1,\ell}. \end{aligned}$$
By the standard localization procedure (see Lemma 3.4.5 of [26]), if a convergence holds for semimartingales with bounded coefficients, then it also holds for semimartingales with locally bounded coefficients. Hence there is no loss of generality in imposing the following assumption.
Assumption A.1
The coefficient processes of \(X\) are bounded.
To prove Theorem 3.1, we need to prove that
$$ \sum_{i=0}^{M-2d}\frac{\prod_{k=1}^{d}|\mu_{i+2k-1}|^{r_{k}}}{\gamma _{i,d}}\Delta\xrightarrow{\,\,P\,\,}\int_{0}^{t}\sigma_{s}^{2}ds \qquad\mbox{as} ~M\rightarrow\infty. $$
We start with an auxiliary lemma.
Lemma A.2
Under Assumptions 2.2, 2.3, and A.1 we have
$$\begin{aligned} \sum_{i=0}^{M-2d}\frac{|\prod_{k=1}^{d}|\mu_{i+2k-1}|^{r_{k}}-\prod _{k=1}^{d}|\theta_{i+2k-1,i}|^{r_{k}}|}{\gamma_{i,d}}\Delta\xrightarrow{\,\, P\,\,} 0. \end{aligned}$$
Proof
From the elementary inequality
$$ \bigg|\prod_{k=1}^{d}|A_{k}|-\prod_{k=1}^{d}|B_{k}|\bigg|\leq\sum_{k=1}^{d}\bigg(\prod_{j=1}^{k-1}|B_{j}| \, |A_{k}-B_{k}|\prod_{j=k+1}^{d}|A_{j}|\bigg), $$
Hölder’s inequality (generalized version), and the Burkholder–Davis–Gundy inequality, recalling that \(\Delta:=\frac {t}{M}\), we can show that
$$\begin{aligned} E&\bigg[\sum_{i=0}^{M-2d}\frac{|\prod_{k=1}^{d}|\mu ^{r_{k}}_{i+2k-1}|-\prod_{k=1}^{d}|\theta^{r_{k}}_{i+2k-1,i}||}{\gamma _{i,d}}\Delta\bigg]\\ \leq&E\bigg[\sum_{i=0}^{M-2d} \frac{\Delta}{\gamma_{i,d}} \\ &\phantom{E\bigg[\sum_{i=0}^{M-2d}}\times{\sum_{k=1}^{d}(||\mu ^{r_{k}}_{i+2k-1}|-|\theta^{r_{k}}_{i+2k-1,i}||\prod_{0< j< k}|\theta^{r_{j}}_{i+2j-1,i} |\prod_{k< j\leq d}|\mu^{r_{j}}_{i+2j-1}|)}\bigg]\\ \leq&\sum_{i=0}^{M-2d}\frac{1}{\gamma_{i,d}}\bigg(\sum_{k=1}^{d}\big(E\big[\big||\mu^{r_{k}}_{i+2k-1}|-|\theta^{r_{k}}_{i+2k-1,i}|\big|^{\frac {2}{r_{k}}}\big]\big)^{\frac{r_{k}}{2}}\\ &\phantom{\sum_{i=0}^{M-2d}\frac{1}{\gamma_{i,d}}\bigg(\sum _{k=1}^{d}}\times\prod_{0< j< k}\big(E\big[|\theta^{r_{j}}_{i+2j-1,i}|^{\frac {2}{r_{j}}}\big]\big)^{\frac{r_{j}}{2}} \prod_{k< j\leq d}\big(E\big[|\mu ^{r_{j}}_{i+2j-1}|^{\frac{2}{r_{j}}}\big]\big)^{\frac{r_{j}}{2}}\bigg)\Delta\\ =&\sum_{i=0}^{M-2d}\frac{1}{\gamma_{i,d}}\bigg(\sum_{k=1}^{d}\big(E\big[\big||\mu^{r_{k}}_{i+2k-1}|-|\theta^{r_{k}}_{i+2k-1,i}|\big|^{\frac {2}{r_{k}}}\big]\big)^{\frac{r_{k}}{2}}\\ &\phantom{\sum_{i=0}^{M-2d}\frac{1}{\gamma_{i,d}}\bigg(\sum _{k=1}^{d}}\times\prod_{0< j< k}\big(E\big[|\theta^{2}_{i+2j-1,i}|\big]\big)^{\frac{r_{j}}{2}} \prod_{k< j\leq d}\big(E\big[|\mu^{2}_{i+2j-1}|\big]\big)^{\frac{r_{j}}{2}}\bigg)\Delta\\ \leq&K_{{\mathbf{r}},d}\sum_{i=0}^{M-2d}\frac{\sum_{k=1}^{d}(\Delta ^{r_{k}}+\Delta^{\frac{r_{k}-1}{2}}\Delta{\mathbf{1}}_{\{r_{k}>1\}})\Delta^{2-\frac {r_{k}}{2}}}{\gamma_{i,d}}\\ \leq&K_{{\mathbf{r}},d}\sum_{i=0}^{M-2d}\frac{\Delta^{2}\sum_{k=1}^{d}(\Delta ^{\frac{r_{k}}{2}}+\Delta^{\frac{1}{2}}{\mathbf{1}}_{\{r_{k}>1\}})}{\gamma _{i,d}}\longrightarrow0 \qquad\mbox{as}~ M\rightarrow\infty. \end{aligned}$$
We have used the elementary inequalities \(||x+y|^{p}-|x|^{p}|\leq K(|y|^{p}+|x|^{p-1}|y|)\) when \(p>1\) and \(||x+y|^{p}-|x|^{p}|\leq|y|^{p}\) when \(p\leq1\). □
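The two elementary inequalities invoked above can be spot-checked numerically. The following Python sketch tests them on random inputs; the constant \(K=p2^{p-1}\) used in the check is one admissible choice (from the mean value theorem), not a constant specified in the text.

```python
import random

def check_power_inequalities(trials=10_000, seed=0):
    """Spot-check the two elementary inequalities on random inputs.

    For p > 1:  | |x+y|^p - |x|^p | <= K (|y|^p + |x|^{p-1} |y|),
    with K = p * 2^{p-1} one admissible constant, and
    for p <= 1: | |x+y|^p - |x|^p | <= |y|^p  (subadditivity of t -> t^p).
    """
    rng = random.Random(seed)
    for _ in range(trials):
        x, y = rng.uniform(-5, 5), rng.uniform(-5, 5)
        # Case p > 1 (illustrative exponent p = 2.5).
        p = 2.5
        K = p * 2 ** (p - 1)
        lhs = abs(abs(x + y) ** p - abs(x) ** p)
        rhs = K * (abs(y) ** p + abs(x) ** (p - 1) * abs(y))
        assert lhs <= rhs + 1e-12
        # Case p <= 1 (illustrative exponent q = 0.7).
        q = 0.7
        assert abs(abs(x + y) ** q - abs(x) ** q) <= abs(y) ** q + 1e-12
    return True
```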
Proof of Theorem 3.1
In view of Lemma A.2, it suffices to show that
$$\begin{aligned} \sum_{i=0}^{M-2d}\frac{\prod_{k=1}^{d}|\theta_{i+2k-1,i}|^{r_{k}}}{\gamma _{i,d}}\Delta\xrightarrow{\,\,P\,\,} \int_{0}^{t}\sigma_{s}^{2}ds. \end{aligned}$$
Further, let
$$ \tilde{\theta}_{i}=\frac{\prod_{k=1}^{d}|\theta_{i+2k-1,i}|^{r_{k}}}{\gamma _{i,d}}\quad \text{and} \quad\tilde{\theta}'_{i}=E[\tilde{\theta }_{i}|{\mathcal{F}}_{s_{i}}]. $$
We have
$$\begin{aligned} E[(\tilde{\theta}_{i})^{2}|{\mathcal{F}}_{s_{i}}]\leq K \end{aligned}$$
and in particular \(\tilde{\theta}'_{i}\leq K\) by Assumption A.1. Since \(E[(\tilde{\theta}_{i}-\tilde{\theta}'_{i})(\tilde{\theta}_{j}-\tilde {\theta}'_{j})]=0\) when \(|i-j|\geq2d\), Assumption 2.3 and the Burkholder–Davis–Gundy inequality yield
$$\begin{aligned} E\bigg[\bigg|\sum_{i=0}^{M-2d}(\tilde{\theta}_{i}\Delta-\tilde{\theta }'_{i}\Delta)\bigg|^{2}\bigg] =&E\bigg[\sum_{i=0}^{M-2d}\sum _{j=0}^{M-2d}(\tilde{\theta}_{i}-\tilde{\theta}'_{i})(\tilde{\theta }_{j}-\tilde{\theta}'_{j}) \Delta^{2}\bigg]\\ \leq& K_{d}\sum_{i=0}^{M-2d}E[(\tilde{\theta}_{i}-\tilde{\theta }'_{i})^{2}]\Delta^{2}\leq K\Delta\longrightarrow0. \end{aligned}$$
Thus, to prove the theorem, we only need to show that
$$\begin{aligned} \sum_{i=0}^{M-2d}\tilde{\theta}'_{i}\Delta\xrightarrow{\,\,P\,\,} \int _{0}^{t}\sigma_{s}^{2}ds. \end{aligned}$$
Note that
$$\begin{aligned} \bigg|\sum_{i=0}^{M-2d}\tilde{\theta}'_{i}\Delta-\int_{0}^{t}\sigma_{s}^{2}ds\bigg| \leq&\bigg|\sum_{i=0}^{M-1}\sigma_{s_{i}}^{2}\Delta-\int_{0}^{t}\sigma _{s}^{2}ds\bigg|+\bigg|\sum_{i=M-2d+1}^{M-1}\sigma^{2}_{s_{i}}\Delta\bigg| \\ \leq&K\int_{0}^{t}|\theta_{s}^{(M)}-\sigma_{s}^{2}|ds+K_{d}\Delta, \end{aligned}$$
where \(\theta_{s}^{(M)}=\tilde{\theta}'_{\max\{i:s_{i}\leq s\}}=\sigma^{2}_{s_{\max\{i:s_{i}\leq s\}}}\) is the piecewise constant approximation of \(\sigma^{2}\) along the grid. The required result follows from the right-continuity of \((\sigma_{s})_{s \geq0}\) together with dominated convergence. □
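For intuition, the consistency statement can be illustrated in the simplest special case (\(d=1\), \(r_{1}=2\), \(L_{i}\equiv1\), constant \(\sigma\)), where the estimator reduces to the realized variance. The following Python sketch, with illustrative parameter values, simulates this reduced case; it is not the general pre-averaged MPV estimator of the paper.

```python
import math
import random

def realized_variance(sigma=0.5, t=1.0, M=200_000, seed=42):
    """Simulate X_t = sigma * W_t on a grid of M steps and return the
    realized variance sum((Delta X)^2), which approximates the
    integrated variance  int_0^t sigma_s^2 ds = sigma^2 * t."""
    rng = random.Random(seed)
    dt = t / M
    rv = 0.0
    for _ in range(M):
        dX = sigma * rng.gauss(0.0, math.sqrt(dt))
        rv += dX * dX
    return rv

# Here sigma^2 * t = 0.25, and the estimate should be close for large M.
```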
Proof of Theorem 3.4
To derive the central limit theorem, we apply the “big blocks–small blocks” technique used in [24] and [14]. The big blocks are constructed so that, after conditioning, their contributions are independent, and the increments in these big blocks eventually dominate the asymptotic behavior; the small blocks, which are asymptotically negligible, are removed from the summation. We now give the details. Given a positive integer \(p\), we define
$$\begin{aligned} a_{i}(p) =&2di(p+1),\quad b_{i}(p)=a_{i}(p)+2dp,\\ A_{i}(p) =&\{k\in{\mathbb {N}}: a_{i}(p)\leq k< b_{i}(p)\},\\ B_{i}(p) =&\{k\in{\mathbb {N}}: b_{i}(p)\leq k< a_{i+1}(p)\},\\ i_{M}(p) =&\max\{i: b_{i}(p)\leq M-2d\}=\bigg\lfloor \frac {M-2d}{2d(p+1)}\bigg\rfloor -1,\\ j_{M}(p) =&b_{i_{M}(p)}(p). \end{aligned}$$
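In Python, this block bookkeeping can be sketched as follows; the parameter values in the usage lines are illustrative.

```python
def block_indices(d, p, M):
    """Reproduce the big-block/small-block bookkeeping: the i-th big
    block A_i(p) holds 2dp indices, the i-th small block B_i(p) holds
    2d indices, and i_M(p) indexes the last big block fitting into
    [0, M - 2d]."""
    a = lambda i: 2 * d * i * (p + 1)          # a_i(p) = 2di(p+1)
    b = lambda i: a(i) + 2 * d * p             # b_i(p) = a_i(p) + 2dp
    i_M = (M - 2 * d) // (2 * d * (p + 1)) - 1
    A = [list(range(a(i), b(i))) for i in range(i_M + 1)]
    B = [list(range(b(i), a(i + 1))) for i in range(i_M + 1)]
    j_M = b(i_M)
    return A, B, i_M, j_M

# Illustrative values: d = 2, p = 3, M = 200.
A, B, i_M, j_M = block_indices(2, 3, 200)
assert all(len(blk) == 2 * 2 * 3 for blk in A)   # big blocks: 2dp indices
assert all(len(blk) == 2 * 2 for blk in B)       # small blocks: 2d indices
assert j_M <= 200 - 2 * 2                        # b_{i_M}(p) <= M - 2d
```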
So, the \(i\)th big block consists of \(\{\prod_{k=1}^{d}|\mu _{j+2k-1}|^{r_{k}}: j\in A_{i}(p)\}\), whereas the \(i\)th small block contains \(\{\prod_{k=1}^{d}|\mu_{j+2k-1}|^{r_{k}}: j\in B_{i}(p)\} \). Because \(\prod_{k=1}^{d}|\mu_{i+2k-1}|^{r_{k}}\) is a \(2d\)-dependent sequence (conditionally on \(\sigma\)), that is,
$$ E\bigg[\prod_{k=1}^{d}|\mu_{i+2k-1}|^{r_{k}}\prod_{k=1}^{d}|\mu _{j+2k-1}|^{r_{k}}\bigg]=E\bigg[\prod_{k=1}^{d}|\mu_{i+2k-1}|^{r_{k}}\bigg] E\bigg[\prod_{k=1}^{d}|\mu_{j+2k-1}|^{r_{k}}\bigg] $$
when \(|i-j|>2d\), after removing the small blocks, we get an independent sequence. We denote
$$ \mu_{i,m}:=E\bigg[\prod_{k=1}^{d}|\mu_{i+2k-1}|^{r_{k}}\bigg|{\mathcal {F}}_{s_{m}}\bigg],\qquad\tilde{\theta}_{i,m}:=E\bigg[\prod_{k=1}^{d}|\theta _{i+2k-1, m}|^{r_{k}}\bigg|{\mathcal {F}}_{s_{m}}\bigg], $$
and the centralized versions are
$$ \hat{\mu}_{i,m}:=\frac{1}{\gamma_{i,d}}\bigg(\prod_{k=1}^{d}|\mu _{i+2k-1}|^{r_{k}}-\mu_{i,m}\bigg), \qquad\hat{\theta}_{i,m}:=\frac {1}{\gamma_{i,d}}\bigg(\prod_{k=1}^{d}|\theta_{i+2k-1, m}|^{r_{k}}-\tilde {\theta}_{i,m}\bigg). $$
For the different kinds of blocks, we use the approximations
$$ \bar{\mu}_{k}= \left\{ \textstyle\begin{array}{ll} \hat{\mu}_{k,a_{i}(p)},\quad& k\in A_{i}(p),\\ \hat{\mu}_{k,b_{i}(p)},& k\in B_{i}(p),\\ \hat{\mu}_{k,j_{M}(p)},& k\geq j_{M}(p), \end{array}\displaystyle \right.~~~~~ \bar{\theta}_{k}= \left\{ \textstyle\begin{array}{ll} \hat{\theta}_{k,a_{i}(p)},\quad& k\in A_{i}(p),\\ \hat{\theta}_{k,b_{i}(p)},& k\in B_{i}(p),\\ \hat{\theta}_{k,j_{M}(p)},& k\geq j_{M}(p). \end{array}\displaystyle \right. $$
(A.1)
Gathering all the terms \(\bar{\mu}_{k}\) for \(k\in A_{i}(p)\) and for \(k\in B_{i}(p)\), respectively, we define
$$\begin{aligned} \varsigma_{i}(p,1)=\sum_{\ell=a_{i}(p)}^{b_{i}(p)-1}\bar{\mu}_{\ell}\Delta ,\qquad\varsigma_{i}(p,2):=\sum_{\ell=b_{i}(p)}^{a_{i+1}(p)-1}\bar{\mu }_{\ell}\Delta. \end{aligned}$$
Note that \(\varsigma_{i}(p,1)\) contains \(2dp\) summands (“big”), whereas \(\varsigma_{i}(p,2)\) contains \(2d\) summands (“small”); because we eventually let \(p\rightarrow\infty\), the small blocks are asymptotically negligible. To make this precise, we set
$$\begin{aligned} N(p)=\sum_{j=0}^{i_{M}(p)}\varsigma_{j}(p,1),\qquad\tilde{N}(p)=\sum _{j=0}^{i_{M}(p)}\varsigma_{j}(p,2),\qquad C(p)=\sum_{j=j_{M}(p)}^{M-2d}\bar {\mu}_{j}\Delta. \end{aligned}$$
We then obtain
$$ {\mathrm{MPV}}(X, {\mathbf{r}})-\int_{0}^{t}\sigma_{s}^{2}ds = N(p)+\tilde{N}(p)+C(p)+R_{1}(p)+R_{2}(p), $$
(A.2)
where
$$\begin{aligned} R_{1}(p)=\sum_{i=0}^{M-2d}(\bar{\mu}_{i}-\bar{\theta}_{i})\Delta, \end{aligned}$$
and \(R_{2}(p)\) is from the error of the Riemann approximation, that is,
$$\begin{aligned} R_{2}(p)=\sum_{i=0}^{M-2d}\bar{\theta}_{i}\Delta-\int_{0}^{t}\sigma_{s}^{2}ds. \end{aligned}$$
Note that \(E[|\bar{\mu}_{j}|]\leq K\); hence \(E[|C(p)|]\leq K(p+1)\Delta\). Similarly, we can show that \(E[|\tilde{N}(p)|]\leq\frac{M}{2d(p+1)} K\Delta \leq\frac{K_{d}}{p+1}\). Moreover,
$$\begin{aligned} R_{2}(p) =&\sum_{i=1}^{i_{M}(p)}\bigg(\sum_{j\in A_{i}(p)}\sigma _{s_{a_{i}(p)}}^{2}\Delta-\int_{s_{a_{i}(p)}}^{s_{b_{i}(p)}}\sigma_{s}^{2}ds\bigg)\\ &+\sum_{i=1}^{i_{M}(p)}\bigg(\sum_{j\in B_{i}(p)}\sigma_{s_{b_{i}(p)}}^{2}\Delta-\int _{s_{b_{i}(p)}}^{s_{a_{i+1}(p)}}\sigma_{s}^{2}ds\bigg)+\sum _{j=j_{M}(p)}^{M-2d}\bigg(\sigma_{s_{j}}^{2}\Delta-\int_{s_{j}}^{s_{j+1}}\sigma _{s}^{2}ds\bigg)\\ =:&I_{M}+\mathit{II}_{M}+\mathit{III}_{M}, \end{aligned}$$
and \(E[|\mathit{III}_{M}|]\leq K(p+1)\sqrt{\Delta}\,\Delta\) and \(E[|\mathit{II}_{M}|]\leq\frac {K\sqrt{\Delta}}{p+1}\). The arguments for \(I_{M}\xrightarrow{P}0\) and \(R_{1}(p)\xrightarrow{P}0\) are similar to the proofs of [9, Eqs. (7.2) and (7.1)], respectively. Therefore, we have
$$ \lim_{p\rightarrow\infty}\limsup_{M\rightarrow\infty}P\big[\sqrt{M}\big(|\tilde{N}(p)|+|C(p)|+|R_{1}(p)|+|R_{2}(p)|\big)>\delta\big]=0 $$
(A.3)
for any \(\delta>0\). We can show (see Lemma A.3) that \(\sqrt {M}N(p)\xrightarrow{{\mathcal{L}}-(s)}U(p)\) for any fixed \(p\) and \(U(p)\xrightarrow{P} \int_{0}^{t}\gamma_{s}dB_{s}\). By combining this with (A.2) and (A.3) we can obtain the required result of Theorem 3.4. □
Lemma A.3
Suppose that \(X\) is a one-dimensional Itô semimartingale with representation (2.1) for which Assumptions 2.2, 3.2, and A.1 are satisfied. Suppose also that Assumption 2.3 on the observation scheme holds with \(\Delta_{i}\equiv\frac{t}{M}\), and let \(\bar{r}=2\). Moreover, let \(p\) be a fixed positive integer.
1) If \(L_{i}\equiv L\), then we have
$$ \sqrt{M}N(p)\xrightarrow{{\mathcal{L}}-(s)}\int_{0}^{t}\gamma(p)_{s}dB_{s} \qquad\textit{as } M\rightarrow\infty, $$
where \(B\) is a standard Brownian motion (defined on an extension of \(\varOmega\)) independent of ℱ, and \(\gamma(p)\) is given by
$$\begin{aligned} (\gamma(p)_{s})^{2} =&\Bigg(\frac{p}{p+1}\bigg(\prod_{k=1}^{d}\frac {m_{2r_{k}}}{m_{r_{k}}^{2}}-1\bigg)\\ &{}+2\sum_{j=1}^{2d-1}\frac{2dp-j}{2d(p+1)}\bigg(\frac{\beta_{j}}{(\alpha ^{2}+(\alpha')^{2})^{2} \prod_{k=1}^{d}m_{r_{k}}^{2}}-1\bigg)\Bigg)\sigma_{s}^{4}, \end{aligned}$$
where \(\beta_{j}\) is defined in Theorem 3.4.
2) In general, if the \(L_{i}\) satisfy Assumption 3.3, then we have
$$ \sqrt{M}N(p)\xrightarrow{{\mathcal{L}}-(s)}\int_{0}^{t}\gamma(p)_{s}dB_{s} \qquad\textit{as } M\rightarrow\infty, $$
where \(B\) is a standard Brownian motion (defined on an extension of \(\varOmega\)) independent of ℱ, and \(\gamma(p)\) is given by
$$\begin{aligned} \gamma(p)_{s}^{2} =&\frac{1}{2d(p+1)}\bigg(2dp\prod_{k=1}^{d}\frac {m_{2r_{k}}}{m_{r_{k}}^{2}} +2\sum_{k=1}^{d-1}(2dp-2k)\prod _{j=1}^{d-k}\frac{m_{r_{j}+r_{k+j}}}{m_{r_{j}}m_{r_{k+j}}}\\ &\phantom{\frac{1}{2d(p+1)}\bigg(} +2d(p+2d-4dp-1)+Q'_{d,p}(s)\bigg)\sigma_{s}^{4}, \end{aligned}$$
where \(Q_{d,p}'(s)\) is given in Assumption 3.3.
Proof
1) Since \(L_{i}\equiv L\) and \(\Delta_{i}\equiv\Delta\), we get \(\alpha_{i}\equiv\alpha\) and \(\alpha'_{i}\equiv\alpha'\). By a martingale central limit theorem argument as presented in [27, Thm. IX.7.28], we need to verify the conditions
$$\begin{aligned} M\sum_{i=0}^{i_{M}(p)}E[\varsigma^{2}_{i}(p,1)|{\mathcal {F}}_{s_{a_{i}(p)}}] \xrightarrow{\,\,P\,\,}&\int_{0}^{t}\big(\gamma(p)\big)^{2}ds, \end{aligned}$$
(A.4)
$$\begin{aligned} M^{2}\sum_{i=0}^{i_{M}(p)}E[\varsigma^{4}_{i}(p,1)|{\mathcal {F}}_{s_{a_{i}(p)}}] \xrightarrow{\,\,P\,\,}& 0, \end{aligned}$$
(A.5)
$$\begin{aligned} \sqrt{M}\sum_{i=0}^{i_{M}(p)}E[\varsigma_{i}(p,1)\Delta W(p)_{i}|{\mathcal {F}}_{s_{a_{i}(p)}}] \xrightarrow{\,\,P\,\,}& 0, \end{aligned}$$
(A.6)
$$\begin{aligned} \sqrt{M}\sum_{i=0}^{i_{M}(p)}E[\varsigma_{i}(p,1)\Delta N(p)_{i}|{\mathcal {F}}_{s_{a_{i}(p)}}] \xrightarrow{\,\,P\,\,}& 0, \end{aligned}$$
(A.7)
where \(\Delta V(p)_{i}=V_{s_{b_{i}(p)-1}}-V_{s_{a_{i}(p)-1}}\) for any process \(V\), and \(N\) in (A.7) is any bounded martingale orthogonal to \(W\). Direct calculations show (A.5). Since \(\varsigma_{i}(p,1)\) is even as a function of \(W\), we have
$$\begin{aligned} E[\varsigma_{i}(p,1)\Delta W(p)_{i}|{\mathcal {F}}_{s_{a_{i}(p)}}]=0, \end{aligned}$$
and we deduce (A.6). The proof of (A.7) is the same as in [24, Lemma 5.7] or [9]. Hence we are left to prove (A.4). Since \(\Delta_{i}\equiv\Delta\), \(\alpha_{i}\equiv\alpha\), and \(\alpha'_{i}\equiv\alpha'\), we have
$$\begin{aligned} \gamma_{i,d}\equiv\big(\alpha^{2}+(\alpha')^{2}\big)\prod _{k=1}^{d}m_{r_{k}}=:\gamma_{d}. \end{aligned}$$
Hence, from (A.1), when \(\ell\in A_{i}(p)\), we have
$$\begin{aligned} E[\bar{\theta}_{\ell}^{2}|{\mathcal {F}}_{s_{a_{i}(p)}}] =&\sigma ^{4}_{s_{a_{i}(p)}}\bigg(\frac{(\alpha^{2}+(\alpha')^{2})^{2} \prod _{k=1}^{d}m_{2r_{k}}}{\gamma_{d}^{2}}-1\bigg) =\sigma^{4}_{s_{a_{i}(p)}}\bigg(\prod_{k=1}^{d}\frac {m_{2r_{k}}}{m_{r_{k}}^{2}}-1\bigg), \\ E[\bar{\theta}_{\ell}\bar{\theta}_{r}|{\mathcal {F}}_{s_{a_{i}(p)}}] =& \sigma^{4}_{s_{a_{i}(p)}}\bigg(\frac{1}{\gamma_{d}^{2}}{\beta_{|\ell -r|}}-1\bigg) =\sigma^{4}_{s_{a_{i}(p)}}\bigg(\frac{\beta_{|\ell-r|}}{(\alpha^{2}+(\alpha ')^{2})^{2}\prod_{k=1}^{d}m_{r_{k}}^{2}}-1\bigg) \end{aligned}$$
when \(|\ell-r|<2d\), where
$$ \beta_{|\ell-r|}:=E\bigg[\prod_{k=1}^{d}|\alpha{N}_{2k-1}+\alpha '{N}_{2k}|^{r_{k}}|\alpha{N}_{2k-1+|\ell-r|}+\alpha'{N}_{2k+|\ell -r|}|^{r_{k}}\bigg] $$
with a sequence of standard normal random variables \((N_{i})_{i \in{\mathbb {N}}}\). The cross moment between \(\bar{\theta}_{\ell}\) and \(\bar{\theta}_{r}\) vanishes when \(|\ell-r|\geq2d\). Therefore, we obtain
$$\begin{aligned} &E\bigg[\bigg(\sum_{\ell=a_{i}(p)}^{b_{i}(p)-1}\bar{\theta}_{\ell}\bigg)^{2}\Delta^{2}\bigg|{\mathcal {F}}_{s_{a_{i}(p)}}\bigg]\\ &=E\bigg[\sum_{\ell,r=a_{i}(p)}^{b_{i}(p)-1}\bar{\theta}_{\ell}\bar{\theta }_{r}\Delta^{2}\bigg|{\mathcal {F}}_{s_{a_{i}(p)}}\bigg]\\ &=\Delta^{2}\sum_{\ell=a_{i}(p)}^{b_{i}(p)-1}E[\bar{\theta}_{\ell }^{2}|{\mathcal {F}}_{s_{a_{i}(p)}}]+\Delta^{2}\sum_{\ell\neq r, \ell ,r=a_{i}(p)}^{b_{i}(p)-1}E[\bar{\theta}_{\ell}\bar{\theta}_{r}|{\mathcal {F}}_{s_{a_{i}(p)}}]\\ &= 2dp\bigg(\prod_{k=1}^{d}\frac{m_{2r_{k}}}{m_{r_{k}}^{2}}-1\bigg)\sigma ^{4}_{s_{a_{i}(p)}}\Delta^{2}\\ &\phantom{=:}+2\sum_{j=1}^{2d-1}(2dp-j)\bigg(\frac{\beta_{j}}{(\alpha^{2}+(\alpha')^{2})^{2}\prod_{k=1}^{d}m_{r_{k}}^{2}}-1\bigg)\sigma ^{4}_{s_{a_{i}(p)}}\Delta^{2}\\ &=\Bigg(2dp\bigg(\prod_{k=1}^{d}\frac{m_{2r_{k}}}{m_{r_{k}}^{2}}-1\bigg)+2\sum _{j=1}^{2d-1}(2dp-j)\\ &\phantom{=:\bigg\{ 2dp\bigg(\prod_{k=1}^{d}\frac {m_{2r_{k}}}{m_{r_{k}}^{2}}-1\bigg)+2\sum_{j=1}^{2d-1}}\times\bigg(\frac{\beta _{j}}{(\alpha^{2}+(\alpha')^{2})^{2}\prod_{k=1}^{d}m_{r_{k}}^{2}}-1\bigg)\Bigg)\sigma^{4}_{s_{a_{i}(p)}}\Delta^{2}. \end{aligned}$$
Observing that
$$\begin{aligned} E[\varsigma^{2}_{i}(p,1)|{\mathcal {F}}_{s_{a_{i}(p)}}] =&E\bigg[\bigg(\sum _{\ell=a_{i}(p)}^{b_{i}(p)-1}\bar{\mu}_{\ell}\bigg)^{2}\Delta^{2}\bigg|{\mathcal {F}}_{s_{a_{i}(p)}}\bigg]\\ =&E\bigg[\bigg(\sum_{\ell=a_{i}(p)}^{b_{i}(p)-1}\bar{\theta}_{\ell}\bigg)^{2}\Delta^{2}\bigg|{\mathcal {F}}_{s_{a_{i}(p)}}\bigg]+O_{p}(\Delta^{5/2}), \end{aligned}$$
we have
$$\begin{aligned} M\sum_{i=0}^{i_{M}(p)}E[\varsigma^{2}_{i}(p,1)|{\mathcal {F}}_{s_{a_{i}(p)}}]\xrightarrow{\,\,P\,\,}\int_{0}^{t}\big(\gamma(p)_{s}\big)^{2}ds, \end{aligned}$$
where
$$\begin{aligned} \big(\gamma(p)_{s}\big)^{2} :=&\frac{1}{2d(p+1)}\Bigg(2dp\bigg(\prod _{k=1}^{d}\frac{m_{2r_{k}}}{m_{r_{k}}^{2}}-1\bigg)\\ &\phantom{\frac{1}{2d(p+1)}\Big(}+2\sum_{j=1}^{2d-1}(2dp-j)\bigg(\frac {\beta_{j}}{(\alpha^{2}+(\alpha')^{2})^{2}\prod_{k=1}^{d}m_{r_{k}}^{2}}-1\bigg)\Bigg)\sigma_{s}^{4}. \end{aligned}$$
This completes the proof of the first part of Lemma A.3.
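The cross-moment structure behind \(\beta_{j}\) can be illustrated numerically in the simplest case \(d=1\), \(r_{1}=2\) (recall \(\bar r=2\)): for centered jointly Gaussian \(X,Y\), Isserlis' theorem gives \(E[X^{2}Y^{2}]=\operatorname{Var}(X)\operatorname{Var}(Y)+2\operatorname{Cov}(X,Y)^{2}\). The Python sketch below, with illustrative values of \(\alpha,\alpha'\), checks this against a Monte Carlo estimate.

```python
import random

def beta_mc(alpha, alpha_p, n=200_000, seed=1):
    """Monte Carlo estimate of beta_1 for d = 1, r_1 = 2:
    beta_1 = E[|a N1 + a' N2|^2 |a N2 + a' N3|^2] with i.i.d. standard
    normals N1, N2, N3; the shared N2 creates the overlap dependence."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        n1, n2, n3 = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
        acc += (alpha * n1 + alpha_p * n2) ** 2 * (alpha * n2 + alpha_p * n3) ** 2
    return acc / n

# By Isserlis: beta_1 = (alpha^2 + alpha_p^2)^2 + 2 (alpha * alpha_p)^2.
a, ap = 0.6, 0.8                                  # illustrative values
exact = (a ** 2 + ap ** 2) ** 2 + 2 * (a * ap) ** 2
```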
2) For the general case, by repeating the procedure it suffices to rederive the asymptotic variance. If \(\ell\in A_{i}(p)\), then we let
$$ I_{\ell,r}=\left\{ \textstyle\begin{array}{ll} 1 &\quad \hbox{if $\ell-r$ is odd,} \\ 0 & \quad\hbox{if $\ell-r$ is even,} \end{array}\displaystyle \right. $$
when \(1\leq\ell-r\leq2d-1\). Now, we have
$$\begin{aligned} E[\bar{\theta}_{\ell}^{2}|{\mathcal {F}}_{s_{a_{i}(p)}}] =&\sigma ^{4}_{s_{a_{i}(p)}}\bigg(\frac{\prod_{k=1}^{d}(\alpha_{\ell+2k-1}^{2}+(\alpha '_{\ell+2k})^{2})^{r_{k}}m_{2r_{k}}}{\prod_{k=1}^{d}(\alpha_{\ell+2k-1}^{2} +(\alpha'_{\ell+2k})^{2})^{r_{k}}m^{2}_{r_{k}}}-1\bigg)\\ =&\sigma^{4}_{s_{a_{i}(p)}}\bigg(\prod_{k=1}^{d}\frac {m_{2r_{k}}}{m_{r_{k}}^{2}}-1\bigg), \\ E[\bar{\theta}_{\ell}\bar{\theta}_{r}|{\mathcal {F}}_{s_{a_{i}(p)}}] =& \left\{ \textstyle\begin{array}{ll} \sigma^{4}_{s_{a_{i}(p)}}\bigg(\prod_{k=1}^{d-\frac{\ell-r}{2}}\frac {m_{r_{k}+r_{k+\frac{\ell-r}{2}}}}{m_{r_{k}}m_{r_{k+\frac{\ell -r}{2}}}}-1\bigg) & \quad\hbox{for even $\ell-r$,} \\ \sigma^{4}_{s_{a_{i}(p)}}\bigg(\frac{E[\prod_{k=1}^{d-\frac{\ell -r-1}{2}}|\mathcal{A}_{k,\ell}|^{r_{k+\frac{\ell-r-1}{2}}}|\mathcal {B}_{k,\ell}|^{r_{k}}]}{\prod_{k=1}^{d-\frac{\ell-r-1}{2}}m_{r_{k}}m_{r_{k+\frac{\ell -r-1}{2}}}}-1\bigg) &\quad \hbox{for odd $\ell-r$,} \end{array}\displaystyle \right. \end{aligned}$$
where \(\mathcal{A}_{k,\ell}\sim{\mathcal {N}}(0,1)\) and \(\mathcal{B}_{k,\ell}\sim{\mathcal {N}}(0,1)\), \(k=1,2,\dots\), are standard normal random variables with covariances
$$\begin{aligned} {\mathrm{Cov}}({\mathcal {A}}_{k,\ell}, {\mathcal {B}}_{k,\ell})&=\alpha _{\ell+2k-1}\alpha'_{\ell+2k-1}, \\ {\mathrm{Cov}}({\mathcal {A}}_{k+1,\ell}, {\mathcal {B}}_{k,\ell})&=\alpha _{\ell+2k+1}\alpha'_{\ell+2k+1}, \\ {\mathrm{Cov}}({\mathcal {A}}_{j,\ell}, {\mathcal {B}}_{k,\ell})&= 0\quad \hbox{if}~ j>k+1. \end{aligned}$$
The cross moment between \(\bar{\theta}_{\ell}\) and \(\bar{\theta}_{r}\) vanishes when \(|\ell-r|\geq2d\). Thus,
$$\begin{aligned} E&\bigg[\bigg(\sum_{\ell=a_{i}(p)}^{b_{i}(p)-1}\bar{\theta}_{\ell}\bigg)^{2}\Delta^{2}\bigg|{\mathcal {F}}_{s_{a_{i}(p)}}\bigg]\\ =&E\bigg[\sum_{\ell,r=a_{i}(p)}^{b_{i}(p)-1}\bar{\theta}_{\ell}\bar{\theta }_{r}\Delta^{2}\bigg|{\mathcal {F}}_{s_{a_{i}(p)}}\bigg]\\ =&\sum_{\ell,r=a_{i}(p)}^{b_{i}(p)-1}E[\bar{\theta}_{\ell}\bar{\theta }_{r}\Delta^{2}|{\mathcal {F}}_{s_{a_{i}(p)}}]\\ =&\sum_{\ell=a_{i}(p)}^{b_{i}(p)-1}E[\bar{\theta}_{\ell}^{2}|{\mathcal {F}}_{s_{a_{i}(p)}}]\Delta^{2}+\sum_{\ell\neq r, \ell ,r=a_{i}(p)}^{b_{i}(p)-1}E[\bar{\theta}_{\ell}\bar{\theta}_{r}|{\mathcal {F}}_{s_{a_{i}(p)}}]\Delta^{2}\\ =&2dp\bigg(\prod_{k=1}^{d}\frac{m_{2r_{k}}}{m_{r_{k}}^{2}}-1\bigg)\sigma ^{4}_{s_{a_{i}(p)}}\Delta^{2}\\ &{}+2\sum_{r=a_{i}(p)}^{b_{i}(p)-1}\sum_{k=1}^{\min\{2d-1, b_{i}(p)-1-r\} }E[\bar{\theta}_{r}\bar{\theta}_{r+k}|{\mathcal {F}}_{s_{a_{i}(p)}}]\Delta ^{2}\\ =&\sigma^{4}_{s_{a_{i}(p)}}\Delta^{2}\Bigg(2dp\bigg(\prod_{k=1}^{d}\frac {m_{2r_{k}}}{m_{r_{k}}^{2}}-1\bigg)+2\sum_{k=1}^{d-1}(2dp-2k)\bigg(\prod _{j=1}^{d-k} \frac{m_{r_{j}+r_{k+j}}}{m_{r_{j}}m_{r_{k+j}}}-1\bigg)\\ &{}+2\sum_{r=a_{i}(p)}^{b_{i}(p)-1}\sum_{k=1}^{\lfloor\frac {b_{i}(p)-1-r}{2}\rfloor\wedge d}\bigg(\frac{E[\prod _{j=1}^{d-k+1}|{\mathcal {A}}_{j,r+2k-1}|^{r_{j+k-1}}|{\mathcal {B}}_{j,r+2k-1}|^{r_{j}}]}{\prod_{j=1}^{d-k+1}m_{r_{j}}m_{r_{j+k-1}}}-1\bigg)\Bigg)\\ =&\sigma^{4}_{s_{a_{i}(p)}}\Delta^{2}\Bigg(2dp\bigg(\prod_{k=1}^{d}\frac {m_{2r_{k}}}{m_{r_{k}}^{2}}-1\bigg)+2\sum_{k=1}^{d-1}(2dp-2k)\bigg(\prod _{j=1}^{d-k} \frac{m_{r_{j}+r_{k+j}}}{m_{r_{j}}m_{r_{k+j}}}-1\bigg)\\ &\phantom{\sigma^{4}_{s_{a_{i}(p)}}\Delta^{2}\Bigg(}{}+2\sum _{r=a_{i}(p)}^{b_{i}(p)-1}\sum_{k=1}^{\lfloor\frac{b_{i}(p)-1-r}{2}\rfloor \wedge d}\bigg(\frac{f_{d,k,r}}{g_{d,k}}-1\bigg)\Bigg), \end{aligned}$$
where the first term is the collection of \(E[(\bar{\theta }_{r})^{2}|{\mathcal {F}}_{s_{a_{i}(p)}}]\), the second term is the collection of \(E[\bar{\theta}_{\ell}\bar{\theta}_{r}|{\mathcal {F}}_{s_{a_{i}(p)}}]\) when \(|\ell-r|\) is even, and the third term is the collection of \(E[\bar {\theta}_{\ell}\bar{\theta}_{r}|{\mathcal {F}}_{s_{a_{i}(p)}}]\) when \(|\ell -r|\) is odd, \(\ell,r\in A_{i}(p)\). In view of Assumption 3.3, we obtain
$$ M\sum_{i=0}^{i_{M}(p)}E[\varsigma^{2}_{i}(p,1)|{\mathcal {F}}_{s_{a_{i}(p)}}]\xrightarrow{\,\,P\,\,}\int_{0}^{t}\gamma(p)_{s}^{2}ds, $$
where
$$\begin{aligned} \gamma(p)_{s}^{2} =&\frac{1}{2d(p+1)}\bigg(2dp\prod_{k=1}^{d}\frac {m_{2r_{k}}}{m_{r_{k}}^{2}} +2\sum_{k=1}^{d-1}(2dp-2k)\prod _{j=1}^{d-k}\frac{m_{r_{j}+r_{k+j}}}{m_{r_{j}}m_{r_{k+j}}}\\ & \phantom{\frac{1}{2d(p+1)}\bigg(}{} +2d(p+2d-4dp-1)+Q'_{d,p}(s)\bigg)\sigma_{s}^{4}. \end{aligned}$$
We thus complete the proof of Lemma A.3. □
We now return to the proof of Theorem 3.4. In view of
$$\begin{aligned} \gamma(p)_{s}^{2}\longrightarrow\gamma_{s}^{2}\qquad \text{as $p\rightarrow \infty$} \end{aligned}$$
for both cases, the assertion of the theorem follows. □
Proof of Theorem 3.5
It suffices to derive the form of the “asymptotic variance.” Denoting now
$$\begin{aligned} a_{i}(p)=4di(p+1),\qquad b_{i}(p)=a_{i}(p)+4dp,\qquad i_{M}(p)=\bigg\lfloor \frac{M-2d}{4d(p+1)}\bigg\rfloor -1 \end{aligned}$$
for \(i=1,\dots, \lfloor\frac{M-2d}{2}\rfloor\), and
$$ \tilde{\theta}_{i,m}:=E\bigg[\prod_{k=1}^{d}|\theta_{2i+2k-2, m}|^{r_{k}}\bigg|{\mathcal {F}}_{s_{m}}\bigg],\qquad\hat{\theta}_{i,m}:=\frac {1}{\gamma_{2i,d}}\bigg(\prod_{k=1}^{d}|\theta_{2i+2k-2, m}|^{r_{k}}-\tilde {\theta}_{i,m}\bigg), $$
we obtain
$$\begin{aligned} & 4 \Delta^{2} E\bigg[\bigg(\sum_{\ell=a_{i}(p)}^{b_{i}(p)-1}\bar{\theta }_{\ell}\bigg)^{2}\bigg|{\mathcal {F}}_{s_{a_{i}(p)}}\bigg] \\ &= 4\Delta^{2} \sum_{\ell=a_{i}(p)}^{b_{i}(p)-1}E[\bar{\theta}_{\ell }^{2}|{\mathcal {F}}_{s_{a_{i}(p)}}]+ 4\Delta^{2} \sum_{\ell\neq r, \ell ,r=a_{i}(p)}^{b_{i}(p)-1}E[\bar{\theta}_{\ell}\bar{\theta}_{r}|{\mathcal {F}}_{s_{a_{i}(p)}}] \\ &=4\Bigg(2dp\bigg(\prod_{k=1}^{d}\frac{m_{2r_{k}}}{m_{r_{k}}^{2}}-1\bigg)+2\sum _{k=1}^{d-1}(2dp-2k)\bigg(\prod_{j=1}^{d-k}\frac {m_{r_{j}+r_{k+j}}}{m_{r_{j}}m_{r_{k+j}}}-1\bigg)\Bigg) \sigma^{4}_{s_{a_{i}(p)}}\Delta^{2} \end{aligned}$$
since we do not include the terms of odd \(|\ell-r|\). Therefore,
$$\begin{aligned} M\sum_{i=0}^{i_{M}(p)}E[\varsigma^{2}_{i}(p,1)|{\mathcal {F}}_{s_{a_{i}(p)}}]\xrightarrow{\,\,P\,\,}\int_{0}^{t}\gamma(p)_{s}^{2}ds, \end{aligned}$$
where
$$\begin{aligned} \gamma(p)_{s}^{2} =&\frac{4}{4d(p+1)}\bigg(2dp\prod_{k=1}^{d}\frac {m_{2r_{k}}}{m_{r_{k}}^{2}} +2\sum_{k=1}^{d-1}(2dp-2k)\prod_{j=1}^{d-k} \frac{m_{r_{j}+r_{k+j}}}{m_{r_{j}}m_{r_{k+j}}} \\ &\phantom{\frac{4}{4d(p+1)}\bigg(}+2d(p+2d-2dp-2)\bigg)\sigma_{s}^{4}. \end{aligned}$$
We observe that
$$ \lim_{p\rightarrow\infty}\gamma(p)_{s}^{2}=\bigg(2\prod_{k=1}^{d}\frac {m_{2r_{k}}}{m_{r_{k}}^{2}}+4\sum_{k=1}^{d-1}\prod_{j=1}^{d-k}\frac {m_{r_{j}+r_{k+j}}}{m_{r_{j}}m_{r_{k+j}}}+2-4d\bigg) \sigma_{s}^{4}. $$
This completes the proof of Theorem 3.5. □
Proof of Theorem 4.1
In view of Theorem 3.1, letting
$$ Y(t)=\int_{0}^{t}b_{s}ds+\int_{0}^{t}\sigma_{s}dW_{s} \quad \text{and}\quad Z(t)=X(t)-Y(t), $$
we need to prove that
$$\begin{aligned} \sum_{i=0}^{M-2d}\frac{(\prod_{k=1}^{d}|\bar{X}_{s_{i+2k}}-\bar {X}_{s_{i+2k-1}}|^{r_{k}}-\prod_{k=1}^{d}|\bar{Y}_{s_{i+2k}}-\bar {Y}_{s_{i+2k-1}}|^{r_{k}})}{\gamma_{i,d}}\Delta \xrightarrow{\,\,P\,\,} 0. \end{aligned}$$
For convenience, we denote
$$ a_{i,k}:=\frac{\bar{Y}_{s_{i+2k}}-\bar{Y}_{s_{i+2k-1}}}{(\alpha _{i+2k-1}^{2}\Delta+(\alpha'_{i+2k})^{2}\Delta)^{1/2}},\qquad b_{i,k}:=\frac {\bar{Z}_{s_{i+2k}}-\bar{Z}_{s_{i+2k-1}}}{(\alpha_{i+2k-1}^{2}\Delta+(\alpha'_{i+2k})^{2}\Delta)^{1/2}}. $$
Then, it suffices to show
$$\begin{aligned} \sum_{i=0}^{M-2d}\bigg(\prod_{k=1}^{d}|a_{i,k}+b_{i,k}|^{r_{k}}-\prod _{k=1}^{d}|a_{i,k}|^{r_{k}}\bigg)\Delta_{i}\xrightarrow{\,\,P\,\,} 0. \end{aligned}$$
Note that
$$\begin{aligned} \bigg|\prod_{k=1}^{d}|a_{i,k}+b_{i,k}|^{r_{k}}-\prod _{k=1}^{d}|a_{i,k}|^{r_{k}}\bigg| \leq&\sum_{k=1}^{d}|b_{i,k}|^{r_{k}}\prod_{j\neq k}|a_{i,j}|^{r_{j}}\\ &{}+\sum_{k=1}^{d}\prod_{j< k}|a_{i,j}|^{r_{j}}\prod_{j\geq k}|b_{i,j}|^{r_{j}}. \end{aligned}$$
Hence, it suffices to show that
$$\begin{aligned} \sum_{i=0}^{M-2d}(|b_{i,k}|^{r_{k}}\prod_{j\neq k}|a_{i,j}|^{r_{j}})\Delta \xrightarrow{\,\,P\,\,}& 0, \qquad k=1,2,\dots,d,\\ \sum_{i=0}^{M-2d}(\prod_{j< k}|a_{i,j}|^{r_{j}}\prod_{j\geq k}|b_{i,j}|^{r_{j}})\Delta \xrightarrow{\,\,P\,\,}& 0, \qquad k=1,2,\dots,d-1. \end{aligned}$$
The proofs of the first \(d\) convergences are similar; here, we show the case \(k=1\). We use the same technique as in [37] and split the jump process \(Z\) into two parts by a threshold \(\epsilon>0\): the first part contains the “big” jumps, whose absolute values are larger than \(\epsilon\), and the second part is the rest. That is,
$$\begin{aligned} Z_{1t}^{\epsilon}=\sum_{0< s\leq t, |J(Z_{s})|>\epsilon} J(Z_{s}), \end{aligned}$$
where \(J(Z_{s})=Z_{s}-Z_{s-}\), and \(Z_{2t}^{\epsilon}=Z_{t}-Z_{1t}^{\epsilon}\). We define \(I_{i}(\epsilon)\) as the indicator of the event that \(|J(Z_{s})|\leq \epsilon\) for all \(s\in(s_{i}, s_{i+2}]\). We then use the generalized Hölder inequality with \(\frac{1}{p_{1}}+\frac {1}{p_{2}}+\cdots+\frac{1}{p_{d}}=1\) and \(\frac{1}{q_{1}}+\frac{1}{q_{2}}+\cdots +\frac{1}{q_{d}}=1\) to obtain
$$\begin{aligned} &\sum_{i=0}^{M-2d}\bigg(|b_{i,1}|^{r_{1}}\prod_{j=2}^{d}|a_{i,j}|^{r_{j}}\bigg)\Delta \\ &=\sum_{i=0}^{M-2d}\bigg(|b_{i,1}|^{r_{1}}\prod _{j=2}^{d}|a_{i,j}|^{r_{j}}\bigg)I_{i}(\epsilon)\Delta+\sum_{i=0}^{M-2d}\bigg(|b_{i,1}|^{r_{1}}\prod_{j=2}^{d}|a_{i,j}|^{r_{j}}\bigg)\big(1-I_{i}(\epsilon )\big)\Delta \\ &\leq\bigg(\sum_{i=0}^{M-2d}|b_{i,1}|^{r_{1}p_{1}}I_{i}(\epsilon)\Delta^{\frac {r_{1}p_{1}}{2}}\bigg)^{\frac{1}{p_{1}}}\prod_{j=2}^{d}\bigg(\sum _{i=0}^{M-2d}|a_{i,j}|^{r_{j}p_{j}} \Delta_{i}^{\frac{r_{j}p_{j}}{2}}\bigg)^{\frac{1}{p_{j}}} \\ &\phantom{=:}+\bigg(\sum_{i=0}^{M-2d}|b_{i,1}|^{r_{1}q_{1}}\big(1-I_{i}(\epsilon)\big)\Delta^{\frac{r_{1}q_{1}}{2}}\bigg)^{\frac {1}{q_{1}}}\prod_{j=2}^{d}\bigg(\sum_{i=0}^{M-2d}|a_{i,j}|^{r_{j}q_{j}} \Delta^{\frac{r_{j}q_{j}}{2}}\bigg)^{\frac{1}{q_{j}}}. \end{aligned}$$
If we take \(p_{j}=q_{j}=\frac{2}{r_{j}}\), then by using a method similar to that in Theorem 3.1 or the result of Theorem 1 in [29] we obtain
$$ \sum_{i=0}^{M-2d}|a_{i,j}|^{r_{j}q_{j}}\Delta^{\frac{r_{j}q_{j}}{2}}\xrightarrow {\,\,P\,\,}\int_{0}^{t}\sigma_{s}^{2}ds $$
for all \(j=1,2,\dots, d\). Now, we consider the term \(\sum _{i=0}^{M-2d}|b_{i,1}|^{r_{1}p_{1}}I_{i}(\epsilon)\Delta^{\frac{r_{1}p_{1}}{2}}\). For sufficiently large \(M\), we have \(\Delta\leq2\epsilon\), and hence \(\max_{j}(t_{j+1}-t_{j})<2\epsilon\). Further, if \(I_{j}(\epsilon)=1\), then \(Z_{t}=Z_{2t}^{\epsilon}\), and from [37] and [23] we have \(\sup_{j}|Z^{\epsilon}_{2t_{j}}-Z^{\epsilon}_{2t_{j-1}}|<2\epsilon\); thus, \(|\Delta^{\frac{1}{2}}b_{i,1}|^{r_{1}p_{1}}I_{i}(\epsilon)\leq K\epsilon\). Next, for a fixed \(\eta\) such that \(\eta>4\epsilon\), let \(c\in(\beta, p_{1}r_{1})\). Then we obtain
$$\begin{aligned} &\limsup_{M\rightarrow\infty}\sum _{i=0}^{M-2d}|b_{i,1}|^{r_{1}p_{1}}I_{i}(\epsilon)\Delta^{\frac {r_{1}p_{1}}{2}} \\ &\leq\limsup_{M\rightarrow\infty}(K\epsilon)^{r_{1}p_{1}-c}\sum _{i=0}^{M-2d}|\Delta^{\frac{1}{2}}b_{i,1}|^{c}I_{i}(\epsilon) \\ &\leq\limsup_{M\rightarrow\infty}(K\epsilon)^{r_{1}p_{1}-c}\sum _{i=1}^{n}|Z^{\epsilon}_{2t_{i}}-Z^{\epsilon}_{2t_{i-1}}|^{c} \\ &\leq(K\epsilon)^{r_{1}p_{1}-c}\limsup_{M\rightarrow\infty}\bigg(\sum _{0< s\leq t, |J(Z_{s})|\leq\eta}|J(Z_{s})|+\sum_{i=1}^{n}|Z^{\eta}_{2t_{i}}-Z^{\eta}_{2t_{i-1}}|^{c}\bigg), \end{aligned}$$
where \(K\) is a constant independent of \(\epsilon\) and \(M\), but depending on \(\max_{i}L_{i}\), \(\eta\), \(d\), and \(c\). Because \(Z\) is a pure jump process of finite variation, both sums are finite. Furthermore, since \(c\in(\beta, r_{1}p_{1})\), we have \(r_{1}p_{1}-c>0\); hence \(\sum _{i=0}^{M-2d}|b_{i,1}|^{r_{1}p_{1}}I_{i}(\epsilon)\Delta^{\frac {r_{1}p_{1}}{2}}\xrightarrow{P}0\) by letting \(\epsilon\rightarrow0\). These choices require \(\beta< r_{1}p_{1}=2\) (since we took \(p_{j}=\frac{2}{r_{j}}\) above), which clearly is not a restriction.
Since we have finitely many big jumps whose absolute value is larger than \(\epsilon\), recalling that \(|\Delta^{\frac{1}{2}}b_{i,1}|^{r_{1}p_{1}}I_{i}(\epsilon)\leq K\epsilon\) for small enough \(\epsilon\), we obtain
$$ \lim_{\epsilon\rightarrow0}\lim_{M\rightarrow\infty}\sum _{i=0}^{M-2d}|b_{i,1}|^{r_{1}q_{1}}\big(1-I_{i}(\epsilon)\big)\Delta^{\frac {r_{1}q_{1}}{2}}=0. $$
Thus, we have proved the first \(d\) convergences.
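The threshold decomposition \(Z=Z_{1}^{\epsilon}+Z_{2}^{\epsilon}\) used above is elementary; a minimal Python sketch, with a hypothetical jump path, is:

```python
def split_jumps(jumps, eps):
    """Split a pure-jump path, given as a list of (time, jump_size)
    pairs, into the finitely many 'big' jumps (|jump| > eps), forming
    Z1^eps, and the remainder Z2^eps, i.e. the decomposition
    Z = Z1^eps + Z2^eps."""
    big = [(s, j) for (s, j) in jumps if abs(j) > eps]
    small = [(s, j) for (s, j) in jumps if abs(j) <= eps]
    return big, small

# Hypothetical jump times and sizes, threshold eps = 0.1.
jumps = [(0.1, 0.03), (0.4, -1.2), (0.55, 0.004), (0.9, 2.5)]
big, small = split_jumps(jumps, eps=0.1)
assert [j for _, j in big] == [-1.2, 2.5]
# The split preserves the total variation of the jump part.
assert abs(sum(abs(j) for _, j in jumps)
           - sum(abs(j) for _, j in big)
           - sum(abs(j) for _, j in small)) < 1e-12
```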
We now prove the last \(d-1\) convergences. As the proofs are similar, here we only prove the case \(k=d-1\). The generalized Hölder inequality yields
$$\begin{aligned} &\sum_{i=0}^{M-2d}\bigg(|b_{i,d}|^{r_{d}}\Delta^{\frac {r_{d}}{2}}|b_{i,d-1}|^{r_{d-1}}\Delta^{\frac{r_{d-1}}{2}}\prod _{j=1}^{d-2}|a_{i,j}|^{r_{j}}\bigg)\Delta\\ &=\sum_{i=0}^{M-2d}\bigg(|b_{i,d}|^{r_{d}}\Delta^{\frac {r_{d}}{2}}|b_{i,d-1}|^{r_{d-1}}\Delta^{\frac{r_{d-1}}{2}}\prod _{j=1}^{d-2}|a_{i,j}|^{r_{j}}\Delta^{\frac{r_{j}}{2}}\bigg)I_{i+2d-4} (\epsilon)I_{i+2d-2}(\epsilon)\\ &\phantom{=:}+\sum_{i=0}^{M-2d}\bigg(|b_{i,d}|^{r_{d}}\Delta^{\frac {r_{d}}{2}}|b_{i,d-1}|^{r_{d-1}}\Delta^{\frac{r_{d-1}}{2}}\prod _{j=1}^{d-2}|a_{i,j}|^{r_{j}}\Delta^{\frac{r_{j}}{2}}\bigg)A_{i}(\epsilon)\\ &\phantom{=:}+\sum_{i=0}^{M-2d}\bigg(|b_{i,d}|^{r_{d}}\Delta^{\frac {r_{d}}{2}}|b_{i,d-1}|^{r_{d-1}}\Delta^{\frac{r_{d-1}}{2}}\prod _{j=1}^{d-2}|a_{i,j}|^{r_{j}}\Delta^{\frac{r_{j}}{2}}\bigg)A_{i}'(\epsilon)\\ &\phantom{=:}+\sum_{i=0}^{M-2d}\bigg(|b_{i,d}|^{r_{d}}\Delta^{\frac {r_{d}}{2}}|b_{i,d-1}|^{r_{d-1}}\Delta^{\frac{r_{d-1}}{2}}\prod _{j=1}^{d-2}|a_{i,j}|^{r_{j}}\Delta^{\frac{r_{j}}{2}}\bigg)A''_{i}(\epsilon), \end{aligned}$$
where
$$\begin{aligned} A_{i}(\epsilon)&=I_{i+2d-4}(\epsilon)\big(1-I_{i+2d-2}(\epsilon)\big),\\ A'_{i}(\epsilon)&= \big(1-I_{i+2d-4}(\epsilon)\big)I_{i+2d-2}(\epsilon ),\\ A''_{i}(\epsilon)&=\big(1-I_{i+2d-4}(\epsilon)\big)\big(1-I_{i+2d-2}(\epsilon)\big). \end{aligned}$$
For the first term, we obtain
$$\begin{aligned} &\sum_{i=0}^{M-2d}\bigg(|b_{i,d}|^{r_{d}}\Delta^{\frac {r_{d}}{2}}|b_{i,d-1}|^{r_{d-1}}\Delta^{\frac{r_{d-1}}{2}}\prod _{j=1}^{d-2}|a_{i,j}|^{r_{j}}\Delta^{\frac{r_{j}}{2}}\bigg)I_{i+2d-4}(\epsilon) I_{i+2d-2}(\epsilon)\\ &\leq\bigg(\sum_{i=0}^{M-2d}|b_{i,d}|^{a_{d}r_{d}}I_{i+2d-2}(\epsilon)\Delta ^{\frac{a_{d}r_{d}}{2}}\bigg)^{\frac{1}{a_{d}}}\\ &\phantom{=:}\times\bigg(\sum _{i=0}^{M-2d}|b_{i,d-1}|^{a_{d-1}r_{d-1}}I_{i+2d-4}(\epsilon) \Delta^{\frac{a_{d-1}r_{d-1}}{2}}\bigg)^{\frac{1}{a_{d-1}}}\\ &\phantom{=:}\times\prod_{j=1}^{d-2}\bigg(\sum _{i=0}^{M-2d}|a_{i,j}|^{r_{j}a_{j}}\Delta^{\frac{r_{j}a_{j}}{2}}\bigg)^{\frac{1}{a_{j}}}, \end{aligned}$$
which tends to zero as \(M\rightarrow\infty\) and \(\epsilon\rightarrow 0\). The second and third terms can be shown to tend to zero as \(M\rightarrow\infty\) in the same way. For the last term, by the same argument as in [12], for \(M\) large enough there are no contiguous jumps, since the term involves only the finitely many large jumps. Hence, for small enough \(\epsilon\), the last term vanishes. This yields the desired result. □
Proof of Theorem 4.2
In view of the proof of Theorem 4.1, it suffices to show that
$$\begin{aligned} \sqrt{\Delta} \sum_{i=0}^{M-2d}\bigg(|b_{i,k}|^{r_{k}}\prod_{j\neq k}|a_{i,j}|^{r_{j}}\bigg)\Delta \xrightarrow{\,\,P\,\,}& 0, \qquad k=1,2,\dots,d,\\ \sqrt{\Delta} \sum_{i=0}^{M-2d}\bigg(\prod_{j< k}|a_{i,j}|^{r_{j}}\prod _{j\geq k}|b_{i,j}|^{r_{j}}\bigg)\Delta \xrightarrow{\,\,P\,\,}& 0, \qquad k=1,2,\dots,d-1. \end{aligned}$$
Here, we show the case \(k=1\); the other cases can be proved similarly. By the generalized Hölder inequality, we have
$$\begin{aligned} &\Delta^{\frac{1}{2}}\sum_{i=0}^{M-2d}\bigg(|b_{i,1}|^{r_{1}}\prod _{j=2}^{d}|a_{i,j}|^{r_{j}}\bigg) \\ &=\Delta^{\frac{1}{2}}\sum_{i=0}^{M-2d}\bigg(|b_{i,1}|^{r_{1}}\prod _{j=2}^{d}|a_{i,j}|^{r_{j}}\bigg)I_{i}(\epsilon)+\Delta^{\frac{1}{2}}\sum _{i=0}^{M-2d}\bigg(|b_{i,1}|^{r_{1}}\prod_{j=2}^{d}|a_{i,j}|^{r_{j}}\bigg)\big(1-I_{i}(\epsilon)\big) \\ &\leq\bigg(\sum_{i=0}^{M-2d}|b_{i,1}|^{r_{1}p_{1}}I_{i}(\epsilon)\Delta ^{1-\frac{p_{1}}{2}}\bigg)^{\frac{1}{p_{1}}}\prod_{j=2}^{d}\bigg(\sum _{i=0}^{M-2d}|a_{i,j}|^{r_{j}p_{j}}\Delta\bigg)^{\frac{1}{p_{j}}} \\ &\phantom{=:}+\bigg(\sum_{i=0}^{M-2d}|b_{i,1}|^{r_{1}q_{1}}\big(1-I_{i}(\epsilon)\big)\Delta^{1-\frac{q_{1}}{2}}\bigg)^{\frac{1}{q_{1}}}\prod _{j=2}^{d}\bigg(\sum_{i=0}^{M-2d}|a_{i,j}|^{r_{j}q_{j}}\Delta\bigg)^{\frac{1}{q_{j}}}. \end{aligned}$$
Now, to ensure that both terms on the right-hand side tend to zero, we require the inequalities
$$\begin{aligned} 1-\frac{q_{1}}{2}-\frac{q_{1}r_{1}}{2}>0, \qquad1-\frac{p_{1}}{2}-\frac {p_{1}r_{1}}{2}+r_{1}p_{1}-\beta>0, \end{aligned}$$
which can be satisfied for suitable \(p_{1},q_{1}>1\) since \(r_{1}<1\) and \(\beta<1\). This, together with Theorem 3.5, yields the required result. □
Proof of Theorem 5.2
We first prove the theorem for the particular case \(L_{i}\equiv L\) and then extend it to the i.i.d. case. For any process \(Z\), let
$$\begin{aligned} \overline{\xi}_{i}(Z) =&\sum_{j=1}^{k_{M}-1}g\bigg(\frac{j}{k_{M}}\bigg)\xi _{i+j}(Z),\qquad\overline{\xi}'_{i}(Z)=\sum_{j=1}^{k_{M}-1}g\bigg(\frac {j}{k_{M}}\bigg)\xi'_{i+j}(Z),\\ \overline{\kappa}_{i,\ell}(Z) =&\sum_{j=1}^{k_{M}-1}g\bigg(\frac {j}{k_{M}}\bigg)\kappa_{i,\ell+j}(Z),\qquad\overline{\kappa}'_{i,\ell }(Z)=\sum_{j=1}^{k_{M}-1}g\bigg(\frac{j}{k_{M}}\bigg)\kappa'_{i,\ell+j}(Z). \end{aligned}$$
Further, let \(\overline{\overline{\xi}}_{i}(Z)=\sqrt{k_{M}}(\overline{\xi }_{i}(Z)+\overline{\xi}'_{i+1}(Z))\), \(\overline{\overline{\kappa }}_{i,\ell}(Z)=\sqrt{k_{M}}(\overline{\kappa}_{i,\ell}(Z)+\overline{\kappa }'_{i,\ell+1}(Z))\). We prove the result for a continuous process; robustness to the presence of jumps can be shown as in [33], or by following the procedure for the previous theorem, which is based on pre-averaged increments. Let \(Y=X^{c}+\epsilon\), where \(X^{c}\) is the continuous part of \(X\). Since the drift term does not affect the asymptotic behavior, we assume that \(X\) does not contain a drift term. Then we have \(\Delta_{i,k_{M}}\overline{Y}=\frac{1}{\sqrt{k_{M}}}\overline {\overline{\xi}}_{i}(Y)\). Thus,
$$\begin{aligned} &\frac{1}{k_{M}}\sum_{i=0}^{M-dk_{M}}\bigg(\prod_{j=1}^{d}|\Delta _{i+(j-1)k_{M},k_{M}}\overline{Y}|^{r_{j}}\bigg)\\ &=\prod_{j=1}^{d}m_{r_{j}}\bigg(\int_{0}^{t}\bar{g}(2)\sigma_{s}^{2}ds +\frac{\overline{g'}(2)\omega^{2}}{\theta^{2}L}t\bigg)+\sum_{j=1}^{3}I_{j,M}, \end{aligned}$$
where
$$\begin{aligned} I_{1,M} =&\frac{\Delta}{\theta^{2}}\sum_{i=0}^{M-dk_{M}}E\bigg[\prod _{j=1}^{d}|\overline{\overline{\kappa}}_{i,i+(j-1)k_{M}}(Y)|^{r_{j}}\bigg|{\mathcal {F}}_{s_{i}}\bigg]\\ &-\prod_{j=1}^{d}m_{r_{j}}\bigg(\int_{0}^{t}\bar{g}(2)\sigma_{s}^{2}ds+\frac {\overline{g'}(2)\omega^{2}}{\theta^{2}L}t\bigg),\\ I_{2,M} =&\frac{\Delta}{\theta^{2}}\sum_{i=0}^{M-dk_{M}}\bigg(\prod _{j=1}^{d}|\overline{\overline{\kappa}}_{i,i+(j-1)k_{M}}(Y)|^{r_{j}}-E\bigg[\prod_{j=1}^{d}|\overline{\overline{\kappa}}_{i,i+(j-1)k_{M}}(Y)|^{r_{j}} \bigg|{\mathcal {F}}_{s_{i}}\bigg]\bigg),\\ I_{3,M} =&\frac{\Delta}{\theta^{2}}\sum_{i=0}^{M-dk_{M}}\bigg(\prod _{j=1}^{d}|\overline{\overline{\xi}}_{i+(j-1)k_{M}}(Y)|^{r_{j}}-\prod _{j=1}^{d}|\overline{\overline{\kappa}}_{i,i+(j-1)k_{M}}(Y)|^{r_{j}}\bigg). \end{aligned}$$
The convergences \(I_{3,M}\xrightarrow{P}0\) and \(I_{2,M}\xrightarrow {P}0\) follow from the same procedure used for the proofs of Lemma A.2 and Theorem 3.1. To show that \(I_{1,M}\xrightarrow{P}0\), we consider
$$ E\bigg[\prod_{j=1}^{d}|\overline{\overline{\kappa}}_{i,\ell }(Y)|^{r_{j}}\bigg|{\mathcal {F}}_{s_{i}}\bigg]=\prod_{j=1}^{d}E\bigg[|\sigma _{s_{i}}\overline{\overline{\kappa}}_{i,\ell}(W)+\overline{\overline {\kappa}}_{\ell}(\epsilon)|^{r_{j}}\bigg|{\mathcal {F}}_{s_{i}}\bigg], $$
and by some simple computations we have
$$\begin{aligned} &\overline{\overline{\kappa}}_{i,\ell}(W)\\ &=\sqrt{k_{M}}\big(\overline{\kappa}_{i,\ell}(W)+\overline{\kappa }'_{i,\ell+1}(W)\big)\\ &=\sqrt{k_{M}}\bigg(\sum_{j=1}^{k_{M}-1}g\Big(\frac{j}{k_{M}}\Big) \sum _{k=1}^{L}\frac{k-1}{L}\Delta_{\ell+j,k}W\\ &\phantom{=:}+\sum_{j=1}^{k_{M}-1}g\Big(\frac{j}{k_{M}}\Big) \sum_{k=1}^{L}\Big(1-\frac{k-1}{L}\Big)\Delta_{\ell+j+1,k}W\bigg)\\ &=\sqrt{k_{M}}g\Big(\frac{1}{k_{M}}\Big) \sum_{k=1}^{L}\frac{k-1}{L}\Delta _{\ell+1,k}W\\ &\phantom{=:}+\sqrt{k_{M}}\sum_{j=1}^{k_{M}-1}\sum_{k=1}^{L}\bigg(g\Big(\frac {j+1}{k_{M}}\Big) \frac{k-1}{L}+g\Big(\frac{j}{k_{M}}\Big) \Big(1-\frac {k-1}{L}\Big)\bigg)\Delta_{\ell+j+1,k}W\\ &=\sqrt{k_{M}}\sum_{j=1}^{k_{M}-1}\sum_{k=1}^{L}\bigg(g\Big(\frac {j+1}{k_{M}}\Big) \frac{k-1}{L}+g\Big(\frac{j}{k_{M}}\Big) \Big(1-\frac {k-1}{L}\Big)\bigg)\Delta_{\ell+j+1,k}W\\ &\phantom{=:}+o_{p}(1), \end{aligned}$$
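The index rearrangement above (a summation by parts across blocks) can be checked numerically. The weight function \(g(x)=\min(x,1-x)\) below is an illustrative assumption; any \(g\) with \(g(1)=0\) works, and \(g(1)=0\) is exactly what makes the boundary term at \(j=k_{M}\) vanish:

```python
import random

# Check the rearrangement of the double sum over (j, k) above.
# g(x) = min(x, 1-x) is an assumed example kernel with g(1) = 0;
# random Gaussians stand in for the increments Delta_{l+j,k} W.
random.seed(0)
kM, L = 50, 7
g = lambda x: min(x, 1.0 - x)
# dW[j][k] plays the role of Delta_{l+j,k} W, j = 1..kM, k = 1..L
dW = [[random.gauss(0.0, 1.0) for _ in range(L + 1)] for _ in range(kM + 1)]

lhs = sum(g(j / kM) * sum((k - 1) / L * dW[j][k] for k in range(1, L + 1))
          for j in range(1, kM)) \
    + sum(g(j / kM) * sum((1 - (k - 1) / L) * dW[j + 1][k] for k in range(1, L + 1))
          for j in range(1, kM))

rhs = g(1 / kM) * sum((k - 1) / L * dW[1][k] for k in range(1, L + 1)) \
    + sum(sum((g((j + 1) / kM) * (k - 1) / L
               + g(j / kM) * (1 - (k - 1) / L)) * dW[j + 1][k]
              for k in range(1, L + 1))
          for j in range(1, kM))

assert abs(lhs - rhs) < 1e-9  # exact identity up to rounding
```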
where \(o_{p}(1)\) denotes a term that tends to 0 in probability as \(M\to \infty\). Denoting the last term (without the \(o_{p}(1)\)) by \(\chi_{\ell}^{M}\), we have \(\chi_{\ell}^{M}\overset{d}{=}A^{M}{N}_{1}\), where \({N}_{1}\) is a standard normal random variable and
$$\begin{aligned} (A^{M})^{2}&:=\frac{1}{L}\sum_{j=1}^{k_{M}-1}\sum_{k=1}^{L}\bigg(g\Big(\frac{j+1}{k_{M}}\Big) \frac{k-1}{L}+g\Big(\frac{j}{k_{M}}\Big) \Big(1-\frac{k-1}{L}\Big)\bigg)^{2} (k_{M}\Delta)\\ &\phantom{:}=\frac{\theta^{2}}{Lk_{M}}\sum_{j=1}^{k_{M}-1}\sum_{k=1}^{L}\bigg(g\Big(\frac{j+1}{k_{M}}\Big) \frac{k-1}{L}+g\Big(\frac{j}{k_{M}}\Big) \Big(1-\frac{k-1}{L}\Big)\bigg)^{2}\\ &\longrightarrow\theta^{2}\bar{g}(2) \qquad\mbox{as } k_{M}\rightarrow \infty. \end{aligned}$$
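This limit can be checked numerically for a concrete kernel. Here \(g(x)=\min(x,1-x)\) and \(\bar{g}(2)=\int_{0}^{1}g(s)^{2}\,ds=1/12\) are assumptions for illustration (the paper fixes \(g\) and the notation \(\bar{g}(2)\) elsewhere); note that the prelimit value is already close to \(\theta^{2}\bar{g}(2)\) for moderate \(k_{M}\), and that the limit does not depend on \(L\):

```python
# Numerical sketch of (A^M)^2 -> theta^2 * gbar2 as k_M -> infinity.
# The kernel g(x) = min(x, 1-x) and gbar2 = int_0^1 g(s)^2 ds = 1/12
# are illustrative assumptions.
theta, L, kM = 1.0, 5, 4000
g = lambda x: min(x, 1.0 - x)
gbar2 = 1.0 / 12.0

A2 = theta**2 / (L * kM) * sum(
    (g((j + 1) / kM) * (k - 1) / L + g(j / kM) * (1 - (k - 1) / L)) ** 2
    for j in range(1, kM)
    for k in range(1, L + 1)
)
assert abs(A2 - theta**2 * gbar2) < 1e-3
```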
For \(\overline{\overline{\xi}}_{\ell}(\epsilon)\), we have
$$\begin{aligned} &\overline{\overline{\xi}}_{\ell}(\epsilon)\\ &=\sqrt{k_{M}}\sum_{j=1}^{k_{M}-1}g\Big(\frac{j}{k_{M}}\Big)(\overline {\epsilon}_{\ell+j+1}-\overline{\epsilon}_{\ell+j})\\ &=\sqrt{k_{M}}\sum_{j=1}^{k_{M}-1}g\Big(\frac{j}{k_{M}}\Big) \frac{1}{L}\sum _{k=1}^{L}\epsilon_{N_{\ell+j}+k}-\sqrt{k_{M}}\sum_{j=1}^{k_{M}-1}g\Big(\frac {j}{k_{M}}\Big) \frac{1}{L}\sum_{k=1}^{L}\epsilon_{N_{\ell+j-1}+k}\\ &=-g\Big(\frac{1}{k_{M}}\Big) \frac{\sqrt{k_{M}}}{L}\sum_{k=1}^{L}\epsilon _{N_{\ell}+k}+\frac{\sqrt{k_{M}}}{L}\sum_{j=1}^{k_{M}-1}\sum_{k=1}^{L}\bigg(g\Big(\frac{j}{k_{M}}\Big)-g\Big(\frac{j+1}{k_{M}}\Big)\bigg) \epsilon_{N_{\ell+j}+k}\\ &=o_{p}(1)+\frac{\sqrt{k_{M}}}{L}\sum_{j=1}^{k_{M}-1}\sum_{k=1}^{L}\bigg(g\Big(\frac{j}{k_{M}}\Big)-g\Big(\frac{j+1}{k_{M}}\Big)\bigg)\epsilon_{N_{\ell+j}+k}. \end{aligned}$$
Similarly, by denoting the last term as \(\vartheta_{\ell}^{M}\), we have \(\vartheta_{\ell}^{M}\overset{d}{\rightarrow} B{N}_{2}\) by the Lindeberg–Feller central limit theorem, where \({N}_{2}\) is a standard normal random variable and
$$\begin{aligned} B^{2}:=\lim_{k_{M}\rightarrow\infty}\frac{k_{M}}{L^{2}}\sum_{j=1}^{k_{M}-1}\sum _{k=1}^{L}\bigg(g\Big(\frac{j}{k_{M}}\Big)-g\Big(\frac{j+1}{k_{M}}\Big)\bigg)^{2}\omega^{2}=\frac{\overline{g'}(2)}{L}\omega^{2}. \end{aligned}$$
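The same kind of numerical check works here. Again \(g(x)=\min(x,1-x)\) is an assumed example kernel, for which \(\overline{g'}(2)=\int_{0}^{1}g'(s)^{2}\,ds=1\):

```python
# Numerical sketch of the prelimit variance tending to
# B^2 = g'bar(2) * omega^2 / L.  The kernel g(x) = min(x, 1-x)
# (so that g'bar(2) = 1) and the parameter values are assumptions.
omega2, L, kM = 1.0, 5, 4000
g = lambda x: min(x, 1.0 - x)
gprimebar2 = 1.0

B2_pre = kM / L**2 * sum(
    (g(j / kM) - g((j + 1) / kM)) ** 2 * omega2
    for j in range(1, kM)
    for k in range(1, L + 1)
)
assert abs(B2_pre - gprimebar2 * omega2 / L) < 1e-3
```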
Following [33] (Lemma 2 and the first paragraph of Theorem 1), we obtain
$$\begin{aligned} E\big[|\sigma_{s_{i}}\overline{\overline{\kappa}}_{i,\ell}(W)+\overline {\overline{\kappa}}_{\ell}(\epsilon)|^{r_{j}}\big|{\mathcal {F}}_{s_{i}}\big]= m_{r_{j}}\bigg(\theta^{2}\bar{g}(2)\sigma^{2}_{s_{i}}+\frac{\overline {g'}(2)}{L}\omega^{2}\bigg)^{\frac{r_{j}}{2}}+o_{p}(1) \end{aligned}$$
uniformly in \(i\). Thus, we obtain
$$\begin{aligned} &\frac{\Delta}{\theta^{2}}\sum_{i=0}^{M-dk_{M}}E\bigg[\prod _{j=1}^{d}|\overline{\overline{\kappa}}_{i,i+(j-1)k_{M}}(Y)|^{r_{j}}\bigg|{\mathcal {F}}_{s_{i}}\bigg]\\ &=\frac{\Delta}{\theta^{2}}\sum_{i=0}^{M-dk_{M}}\prod_{j=1}^{d}E\big[|\sigma _{s_{i}}\overline{\overline{\kappa}}_{i,i+(j-1)k_{M}}(W)+\overline{\overline {\kappa}}_{i+(j-1)k_{M}}(\epsilon)|^{r_{j}}\big|{\mathcal {F}}_{s_{i}}\big]\\ &=\frac{\Delta}{\theta^{2}}\sum_{i=0}^{M-dk_{M}}\prod_{j=1}^{d}m_{r_{j}}\bigg(\theta^{2}\bar{g}(2)\sigma^{2}_{s_{i}}+\frac{\overline{g'}(2)}{L}\omega ^{2}\bigg)^{\frac{r_{j}}{2}}+o_{p}(1)\\ &=\frac{\Delta}{\theta^{2}}\sum_{i=0}^{M-dk_{M}}\bigg(\prod _{j=1}^{d}m_{r_{j}}\bigg)\bigg(\theta^{2}\bar{g}(2)\sigma^{2}_{s_{i}}+\frac {\overline{g'}(2)}{L}\omega^{2}\bigg)+o_{p}(1)\\ &\longrightarrow\bigg(\prod_{j=1}^{d}m_{r_{j}}\bigg)\bigg(\bar{g}(2)\int _{0}^{t}\sigma^{2}_{s}ds+\frac{\overline{g'}(2)}{\theta^{2} L}\omega^{2}t\bigg). \end{aligned}$$
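Throughout, \(m_{r}\) denotes the absolute moment \(E|N(0,1)|^{r}\) of a standard normal, which has the closed form \(m_{r}=2^{r/2}\Gamma(\frac{r+1}{2})/\sqrt{\pi}\). A quick sketch verifying this against known values:

```python
import math

def m(r: float) -> float:
    """Absolute moment E|N(0,1)|^r of a standard normal variable."""
    return 2 ** (r / 2) * math.gamma((r + 1) / 2) / math.sqrt(math.pi)

# Known values: m_1 = sqrt(2/pi), m_2 = 1, m_4 = 3.
assert abs(m(1.0) - math.sqrt(2 / math.pi)) < 1e-12
assert abs(m(2.0) - 1.0) < 1e-12
assert abs(m(4.0) - 3.0) < 1e-9
```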
If \((L_{i})_{i \in{\mathbb {N}}}\) is an i.i.d. random sequence taking positive integer values with \(E[\frac{1}{L_{i}}]=\lambda\), then we similarly have \((A^{M})^{2}\rightarrow\theta^{2}\bar{g}(2)\) as \(k_{M}\rightarrow\infty\), and
$$\begin{aligned} \overline{\overline{\xi}}_{\ell}(\epsilon)&=o_{p}(1)+\sqrt {k_{M}}\sum_{j=1}^{k_{M}-1}\bigg(g\Big(\frac{j}{k_{M}}\Big)-g\Big(\frac {j+1}{k_{M}}\Big)\bigg) \bigg(\frac{1}{L_{\ell+j}}\sum_{k=1}^{L_{\ell+j}} \epsilon_{N_{\ell+j}+k}\bigg)\\ &\overset{d}{\longrightarrow} \sqrt{\overline{g'}(2)\lambda\omega^{2}}\,{N}_{2}. \end{aligned}$$
The result follows. □
Proof of Proposition 5.3
Note that
$$\begin{aligned} \widetilde{\omega^{2}}&=\frac{1}{2M}\bigg(\sum_{i=1}^{M}(\bar{X}_{s_{i}}-\bar {X}_{s_{i-1}})^{2}+\sum_{i=1}^{M}(\bar{\epsilon}_{s_{i}}-\bar{\epsilon }_{s_{i-1}})^{2}\\ &\phantom{=:}+2\sum_{i=1}^{M}(\bar{X}_{s_{i}}-\bar{X}_{s_{i-1}})(\bar {\epsilon}_{s_{i}}-\bar{\epsilon}_{s_{i-1}})\bigg). \end{aligned}$$
First, we have
$$\begin{aligned} \frac{1}{2M}\sum_{i=1}^{M}(\bar{X}_{s_{i}}-\bar{X}_{s_{i-1}})^{2} \leq&\frac {3}{2M}\sum_{i=1}^{M}(\bar{X}^{c}_{s_{i}}-\bar{X}^{c}_{s_{i-1}})^{2}+\frac {3}{2M}\sum_{i=1}^{M}(\bar{X}^{J_{1}}_{s_{i}}-\bar{X}^{J_{1}}_{s_{i-1}})^{2}\\ &+\frac{3}{2M}\sum_{i=1}^{M}(\bar{X}^{J_{2}}_{s_{i}}-\bar{X}^{J_{2}}_{s_{i-1}})^{2}, \end{aligned}$$
where \(dX^{c}_{t}=b_{t}dt+\sigma_{t}dW_{t}\) is the continuous part, \(dX^{J_{1}}=\int_{R}h(x)(\mu-\nu)(dx,dt)\) is the jump martingale part, and \(dX^{J_{2}}=\int_{R}(x-h(x))\mu(dx,dt)\) is the big-jump part. The estimates \(E[(\bar{X}^{c}_{s_{i}}-\bar{X}^{c}_{s_{i-1}})^{2}]\leq K\Delta\) and \(E[(\bar{X}^{J_{1}}_{s_{i}}-\bar{X}^{J_{1}}_{s_{i-1}})^{2}]\leq K\Delta\) yield \(\frac{3}{2M}\sum_{i=1}^{M}(\bar{X}^{c}_{s_{i}}-\bar {X}^{c}_{s_{i-1}})^{2}\xrightarrow{\,\,P\,\,}0\) and \(\frac{3}{2M}\sum_{i=1}^{M}(\bar{X}^{J_{1}}_{s_{i}}-\bar {X}^{J_{1}}_{s_{i-1}})^{2}\xrightarrow{\,\,P\,\,}0\), respectively. Since \(h(x)=x\) near zero, \(X^{J_{2}}\) has only finitely many jumps on \([0,t]\), so the third sum contains only finitely many nonzero terms. Therefore, \(\frac{3}{2M}\sum _{i=1}^{M}(\bar{X}^{J_{2}}_{s_{i}}-\bar{X}^{J_{2}}_{s_{i-1}})^{2}\xrightarrow{\,\,P\,\,}0\).
Second, by the Cauchy–Schwarz inequality and Assumption 5.1, we obtain
$$\begin{aligned} &\frac{1}{2M}\sum_{i=1}^{M}(\bar{X}_{s_{i}}-\bar{X}_{s_{i-1}})(\bar {\epsilon}_{s_{i}}-\bar{\epsilon}_{s_{i-1}})\xrightarrow{\,\,P\,\,}0. \end{aligned}$$
Third, note that
$$\begin{aligned} \frac{1}{2M}\sum_{i=1}^{M}(\bar{\epsilon}_{s_{i}}-\bar{\epsilon }_{s_{i-1}})^{2}=\frac{1}{2M}\sum_{i=1}^{M}\bar{\epsilon}_{s_{i}}^{2} +\frac{1}{2M}\sum_{i=1}^{M}\bar{\epsilon}_{s_{i-1}}^{2}-\frac{1}{M}\sum _{i=1}^{M}\bar{\epsilon}_{s_{i-1}}\bar{\epsilon}_{s_{i}}. \end{aligned}$$
Since \(E[\bar{\epsilon}_{s_{i}}^{2}]=\lambda\omega^{2}\) and \(E[\bar{\epsilon }_{s_{i-1}}\bar{\epsilon}_{s_{i}}]=0\), we obtain
$$\begin{aligned} \frac{1}{2M}\sum_{i=1}^{M}(\bar{\epsilon}_{s_{i}}-\bar{\epsilon }_{s_{i-1}})^{2}\xrightarrow{\,\,P\,\,}\lambda\omega^{2} \end{aligned}$$
by the law of large numbers. □
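The last law-of-large-numbers step can be illustrated by a small simulation; the block-size distribution, noise law, and sample size below are arbitrary choices for illustration:

```python
import random

# Monte Carlo sketch: (1/2M) * sum (eps_bar_i - eps_bar_{i-1})^2
# should be close to lambda * omega^2, where lambda = E[1/L_i],
# for i.i.d. N(0, omega^2) noise averaged over blocks of random size.
# All concrete parameter choices here are illustrative assumptions.
random.seed(1)
M, omega2 = 200_000, 1.0
sizes = [1, 2, 3, 4, 5]                       # support of L_i (uniform)
lam = sum(1 / s for s in sizes) / len(sizes)  # lambda = E[1/L_i]

def eps_bar() -> float:
    """Block average of L_i i.i.d. N(0, omega^2) noise terms."""
    Li = random.choice(sizes)
    return sum(random.gauss(0.0, omega2 ** 0.5) for _ in range(Li)) / Li

bars = [eps_bar() for _ in range(M + 1)]
stat = sum((bars[i] - bars[i - 1]) ** 2 for i in range(1, M + 1)) / (2 * M)
assert abs(stat - lam * omega2) < 0.02
```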
Proof of Theorem 5.5
Let
$$\begin{aligned} \zeta_{i}:=\frac{1}{\bar{g}(2)}\bigg(\frac{\theta^{2}\prod_{j=1}^{d}|\Delta _{iK_{M}+(j-1)k_{M},k_{M}}\overline{X(\epsilon)}|^{r_{j}}}{\prod_{k=1}^{d} m_{r_{k}}}-\frac{\overline{g'}(2)\widetilde{\omega^{2}}t}{(\lfloor\frac {M}{k_{M}}\rfloor-d)}\bigg). \end{aligned}$$
By using the big-small-blocks technique, we have
$$\begin{aligned} &a_{i}(p)=i(p+d)k_{M},\qquad b_{i}(p)=a_{i}(p)+pk_{M},\\ &A_{i}(p)=\{k\in{\mathbb {N}}: a_{i}(p)\leq k< b_{i}(p)\},\\ &B_{i}(p)=\{k\in{\mathbb {N}}: b_{i}(p)\leq k< a_{i+1}(p)\},\\ &i_{M}(p)=\max\{i: b_{i}(p)\leq M-dk_{M}\}=\bigg\lfloor \frac {M-dk_{M}}{(p+d)k_{M}}\bigg\rfloor -1,\\ &j_{M}(p)=b_{i_{M}(p)}(p). \end{aligned}$$
Further, let
$$\begin{aligned} \hat{\zeta}_{i}&:=\frac{1}{\bar{g}(2)}\bigg(\frac{\theta^{2}\prod _{j=1}^{d}|\sigma_{s_{a_{i}(p)}}\Delta_{iK_{M}+(j-1)k_{M},k_{M}}\overline {W}+\Delta_{iK_{M}+(j-1)k_{M},k_{M}}\overline{\epsilon}|^{r_{j}}}{\prod_{k=1}^{d} m_{r_{k}}}\\ &\phantom{=:\frac{1}{\bar{g}(2)}\bigg(}-\frac{\overline{g'}(2)\omega ^{2}\lambda t}{(\lfloor\frac{M}{k_{M}}\rfloor-d)}\bigg) \end{aligned}$$
and \(\varsigma_{i}(p)=\sum_{l=a_{i}(p)}^{b_{i}(p)-1}(\zeta_{l}-E[\zeta _{l}|{\mathcal {F}}_{s_{a_{i}(p)}}])\). Similarly to [33], we can show that
$$\begin{aligned} \sqrt{k_{M}}\bigg(V_{M}-\sum_{i=0}^{i_{M}(p)}\varsigma_{i}(p)\bigg)\xrightarrow {\,\,P\,\,}0. \end{aligned}$$
We compute the asymptotic variance. In view of Assumption 5.4, we obtain a result similar to Lemma 4 of [33]. Thus, we have
$$\begin{aligned} &E\big[(\zeta_{\ell}-E[\zeta_{\ell}|{\mathcal {F}}_{s_{a_{i}(p)}}])^{2}\big|{\mathcal {F}}_{s_{a_{i}(p)}}\big]\\ &=E\big[(\hat{\zeta}_{\ell}-E[\hat{\zeta}_{\ell}|{\mathcal {F}}_{s_{a_{i}(p)}}])^{2}\big|{\mathcal {F}}_{s_{a_{i}(p)}}\big]+o_{p}(\Delta^{\frac{3}{2}})\\ &=\frac{1}{k^{2}_{M}(\bar{g}(2))^{2}}E\bigg[\bigg(\frac{\theta^{2}\prod _{j=1}^{d}|\sigma_{s_{a_{i}(p)}}\Delta_{iK_{M}+(j-1)k_{M},k_{M}}\overline {W}+\Delta_{iK_{M}+(j-1)k_{M},k_{M}}\overline{\epsilon}|^{r_{j}}}{\prod_{k=1}^{d} m_{r_{k}}}\\ &\phantom{=:\frac{1}{k^{2}_{M}(\bar{g}(2))^{2}}E\bigg[}-\theta^{2}\big(\sigma ^{2}_{s_{a_{i}(p)}}\theta^{2}\bar{g}(2)+\overline{g'}(2)\lambda\omega^{2}\big)\bigg)^{2}\bigg|{\mathcal {F}}_{s_{a_{i}(p)}}\bigg]+o_{p}(\Delta^{\frac{3}{2}})\\ &=\frac{\theta^{4}}{k^{2}_{M}(\bar{g}(2))^{2}}\Bigg(\bigg(\frac{\prod _{j=1}^{d}m_{2r_{j}}}{\prod_{j=1}^{d} m_{r_{j}}^{2}}-1\bigg)\big(\sigma ^{2}_{s_{a_{i}(p)}}\theta^{2}\bar{g}(2)+\overline{g'}(2)\lambda\omega^{2}\big)^{2}\Bigg)+o_{p}(\Delta^{\frac{3}{2}}). \end{aligned}$$
For \(1\leq\ell-r< d\), we have
$$\begin{aligned} &E\big[(\zeta_{\ell}-E[\zeta_{\ell}|{\mathcal {F}}_{s_{a_{i}(p)}}])(\zeta _{r}-E[\zeta_{r}|{\mathcal {F}}_{s_{a_{i}(p)}}])\big|{\mathcal {F}}_{s_{a_{i}(p)}}\big]\\ &=\frac{\theta^{4}}{k^{2}_{M}(\bar{g}(2))^{2}} \Bigg(\bigg(\prod_{j=1}^{d-(\ell -r)}\frac{m_{r_{j}+r_{j+\ell-r}}}{m_{r_{j}}m_{r_{j+\ell-r}}}-1\bigg) \big(\sigma^{2}_{s_{a_{i}(p)}}\theta^{2}\bar{g}(2)+\overline{g'}(2)\lambda \omega^{2}\big)^{2}\Bigg)\\ &\phantom{=}+o_{p}(\Delta^{\frac{3}{2}}), \end{aligned}$$
and the (conditional) covariances are zero when \(\ell-r\geq d\). Thus,
$$\begin{aligned} &k_{M}\sum_{i=0}^{i_{M}(p)}E[\varsigma^{2}_{i}(p)|{\mathcal {F}}_{s_{a_{i}(p)}}]\\ &=\frac{\theta^{4}}{k_{M}(\bar{g}(2))^{2}}\sum_{i=0}^{i_{M}(p)}\Bigg(p\bigg(\prod_{k=1}^{d}\frac{m_{2r_{k}}}{m_{r_{k}}^{2}}-1\bigg)+2\sum _{k=1}^{d-1}(p-k)\bigg(\prod_{j=1}^{d-k} \frac{m_{r_{j}+r_{k+j}}}{m_{r_{j}}m_{r_{k+j}}}-1\bigg)\Bigg)\\ &\phantom{=:\frac{\theta^{4}}{k_{M}(\bar{g}(2))^{2}}\sum_{i=0}^{i_{M}(p)}}\times \big(\sigma^{2}_{s_{a_{i}(p)}}\theta^{2}\bar{g}(2)+\overline{g'}(2)\lambda \omega^{2}\big)^{2}+o_{p}(\Delta^{\frac{1}{2}})\\ &\longrightarrow\frac{1}{p+d}\Bigg(p\bigg(\prod_{k=1}^{d}\frac {m_{2r_{k}}}{m_{r_{k}}^{2}}-1\bigg)+2\sum_{k=1}^{d-1}(p-k)\bigg(\prod _{j=1}^{d-k}\frac{m_{r_{j}+r_{k+j}}}{m_{r_{j}}m_{r_{k+j}}}-1\bigg)\Bigg)\\ &\phantom{::\longrightarrow}\times\Bigg(\theta^{6}\int_{0}^{t}\sigma _{s}^{4}ds+2\theta^{4}\frac{\overline{g'}(2)\lambda\omega^{2}}{\bar{g}(2)}\int _{0}^{t}\sigma_{s}^{2}ds+\bigg(\frac{\theta\lambda\omega^{2}\overline{g'}(2)}{\bar {g}(2)}\bigg)^{2}t\Bigg). \end{aligned}$$
Denoting the limit as \(\gamma(p)\), we observe that
$$\begin{aligned} \lim_{p\rightarrow\infty}\gamma(p)&=\Bigg(\prod_{k=1}^{d}\frac {m_{2r_{k}}}{m_{r_{k}}^{2}}+2\sum_{k=1}^{d-1}\bigg(\prod _{j=1}^{d-k}\frac{m_{r_{j}+r_{k+j}}}{m_{r_{j}}m_{r_{k+j}}}\bigg)-2d+1\Bigg) \\ &\phantom{=:}\times\Bigg(\theta^{6}\int_{0}^{t}\sigma_{s}^{4}ds+2\theta^{4}\frac {\overline{g'}(2)\lambda\omega^{2}}{\bar{g}(2)}\int_{0}^{t}\sigma_{s}^{2}ds+\bigg(\frac{\theta\lambda\omega^{2}\overline{g'}(2)}{\bar{g}(2)}\bigg)^{2}t\Bigg). \end{aligned}$$
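The passage to the limit in \(p\) uses only \((p-k)/(p+d)\rightarrow1\). A numerical sketch, writing \(\Pi_{0}=\prod_{k}m_{2r_{k}}/m_{r_{k}}^{2}\) and \(\Pi_{k}\) for the lag-\(k\) moment ratios and dividing out the common variance factor; the values of \(d\) and the \(\Pi\)'s below are arbitrary placeholders:

```python
# Check lim_{p->oo} gamma(p)/V = Pi_0 + 2*sum_k Pi_k - 2d + 1,
# where V is the common variance factor (divided out here).
# d and the Pi values are arbitrary illustrative numbers.
d = 3
Pi0, Pi = 2.5, [1.2, 1.1]   # Pi[k-1] stands for the lag-k ratio, k = 1..d-1

def gamma_over_V(p: float) -> float:
    """Prelimit coefficient gamma(p)/V from the display above."""
    return (p * (Pi0 - 1)
            + 2 * sum((p - k) * (Pi[k - 1] - 1) for k in range(1, d))) / (p + d)

limit = Pi0 + 2 * sum(Pi) - 2 * d + 1
assert abs(gamma_over_V(1e8) - limit) < 1e-6
```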
In view of Assumption 5.4, similarly to [33] (Lemma 8), the convergences of (A.5)–(A.7) can be shown. This completes the proof of Theorem 5.5. □
Proof of Proposition 5.7
The proof is similar to that of Theorem 5.2. □