
Detecting relevant differences in the covariance operators of functional time series: a sup-norm approach

Annals of the Institute of Statistical Mathematics

Abstract

In this paper we propose statistical inference tools for the covariance operators of functional time series in the two-sample and change-point problem. In contrast to most of the literature, the focus of our approach is not testing the null hypothesis of exact equality of the covariance operators. Instead, we propose to formulate the null hypotheses in the form that “the distance between the operators is small”, where we measure deviations by the sup-norm. We provide powerful bootstrap tests for hypotheses of this type, investigate their asymptotic properties and study their finite sample properties by means of a simulation study.
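To make the central quantity concrete, the following minimal sketch (our illustration, not code from the paper; all names are ours) computes the plug-in estimator of the sup-norm distance \(d_\infty = \sup_{t,u} |C_1(t,u) - C_2(t,u)|\) between two empirical covariance kernels evaluated on a common grid:

```python
# Illustrative sketch (not the authors' code): plug-in estimator of the
# sup-norm distance between two empirical covariance kernels on a grid.
import numpy as np

def empirical_cov_kernel(X):
    """X: (n, p) array of n curves observed on a common grid of p points."""
    Xc = X - X.mean(axis=0)           # centred curves eta_j
    return Xc.T @ Xc / X.shape[0]     # C(t_a, t_b) = n^{-1} sum_j eta_j(t_a) eta_j(t_b)

def sup_norm_distance(X, Y):
    """d-hat_inf = sup_{t,u} |C1-hat(t,u) - C2-hat(t,u)|."""
    return np.max(np.abs(empirical_cov_kernel(X) - empirical_cov_kernel(Y)))

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))         # sample 1: 200 curves, 50 grid points
Y = 1.2 * rng.standard_normal((150, 50))   # sample 2: variance inflated to 1.44
print(sup_norm_distance(X, Y))             # population value here: |1.44 - 1| = 0.44
```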


References

  • Aue, A., Dubart Norinho, D., Hörmann, S. (2015). On the prediction of stationary functional time series. Journal of the American Statistical Association, 110, 378–392.

  • Aue, A., Rice, G., Sönmez, O. (2018). Detecting and dating structural breaks in functional data without dimension reduction. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80(3), 509–529.

  • Aue, A., Rice, G., Sönmez, O. (2020). Structural break analysis for spectrum and trace of covariance operators. Environmetrics, 31(1), e2617.

  • Billingsley, P. (1968). Convergence of probability measures. New York: Wiley.

  • Boente, G., Rodriguez, D., Sued, M. (2018). Testing equality between several populations covariance operators. Annals of the Institute of Statistical Mathematics, 70(4), 919–950.

  • Bosq, D. (2000). Linear processes in function spaces: Theory and applications. Lecture Notes in Statistics. New York: Springer.

  • Bücher, A., Kojadinovic, I. (2016). A dependent multiplier bootstrap for the sequential empirical copula process under strong mixing. Bernoulli, 22(2), 927–968.

  • Bücher, A., Kojadinovic, I. (2019). A note on conditional versus joint unconditional weak convergence in bootstrap consistency results. Journal of Theoretical Probability, 32, 1145–1165.

  • Cabassi, A., Pigoli, D., Secchi, P., Carter, P. A. (2017). Permutation tests for the equality of covariance operators of functional data with applications to evolutionary biology. Electronic Journal of Statistics, 11(2), 3815–3840.

  • Cárcamo, J., Rodríguez, L.-A., Cuevas, A. (2020). Directional differentiability for supremum-type functionals: Statistical applications. Bernoulli, 26(3), 2143–2175.

  • Carey, J. R., Liedo, P., Müller, H.-G., Wang, J.-L., Chiou, J.-M. (1998). Relationship of age patterns of fecundity to mortality, longevity, and lifetime reproduction in a large cohort of Mediterranean fruit fly females. The Journals of Gerontology Series A, Biological Sciences and Medical Sciences, 53, B245–B251.

  • Carlstein, E. (1986). The use of subseries methods for estimating the variance of a general statistic from a stationary time series. Annals of Statistics, 14(3), 1171–1179.

  • Dehling, H. (1983). Limit theorems for sums of weakly dependent Banach space valued random variables. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 63(2), 393–432.

  • Dehling, H., Philipp, W. (2002). Empirical Process Techniques for Dependent Data (pp. 3–113). Boston, MA: Birkhäuser Boston.

  • Dette, H., Kokot, K., Aue, A. (2020a). Functional data analysis in the Banach space of continuous functions. Annals of Statistics, 48(2), 1168–1192.

  • Dette, H., Kokot, K., Volgushev, S. (2020b). Testing relevant hypotheses in functional time series via self-normalization. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 82(3), 629–660.

  • Ferraty, F., Vieu, P. (2010). Nonparametric functional data analysis. New York: Springer.

  • Fremdt, S., Steinebach, J. G., Horváth, L., Kokoszka, P. (2013). Testing the equality of covariance operators in functional samples. Scandinavian Journal of Statistics, 40(1), 138–152.

  • Gaenssler, P., Molnár, P., Rost, D. (2007). On continuity and strict increase of the CDF for the sup-functional of a Gaussian process with applications to statistics. Results in Mathematics, 51(1), 51–60.

  • Guo, J., Zhou, B., Zhang, J.-T. (2018). Testing the equality of several covariance functions for functional data: a supremum-norm based test. Computational Statistics & Data Analysis, 124, 15–26.

  • Horváth, L., Kokoszka, P. (2012). Inference for functional data with applications. New York: Springer.

  • Hsing, T., Eubank, R. (2015). Theoretical foundations of functional data analysis, with an introduction to linear operators. New York: Wiley.

  • Janson, S., Kaijser, S. (2015). Higher moments of Banach space valued random variables. Memoirs of the American Mathematical Society, 238. Providence, RI: American Mathematical Society.

  • Jarušková, D. (2013). Testing for a change in covariance operator. Journal of Statistical Planning and Inference, 143(9), 1500–1511.

  • Kraus, D., Panaretos, V. M. (2012). Dispersion operators and resistant second-order functional data analysis. Biometrika, 99(4), 813–832.

  • Künsch, H. (1989). The jackknife and the bootstrap for general stationary observations. Annals of Statistics, 17(3), 1217–1241.

  • Liebl, D., Reimherr, M. (2019). Fast and fair simultaneous confidence bands for functional parameters. arXiv:1910.00131.

  • Panaretos, V. M., Kraus, D., Maddocks, J. H. (2010). Second-order comparison of Gaussian random functions and the geometry of DNA minicircles. Journal of the American Statistical Association, 105(490), 670–682.

  • Paparoditis, E., Sapatinas, T. (2016). Bootstrap-based testing of equality of mean functions or equality of covariance operators for functional data. Biometrika, 103(3), 727–733.

  • Pigoli, D., Aston, J. A. D., Dryden, I. L., Secchi, P. (2014). Distances and inference for covariance operators. Biometrika, 101(2), 409–422.

  • Pilavakis, D., Paparoditis, E., Sapatinas, T. (2020). Testing equality of autocovariance operators for functional time series. Journal of Time Series Analysis, 41, 571–589.

  • Politis, D., Romano, J. (1994). The stationary bootstrap. Journal of the American Statistical Association, 89(428), 1303–1313.

  • Ramsay, J. O., Silverman, B. W. (2005). Functional data analysis (2nd ed.). New York: Springer.

  • Sharipov, O. S., Wendler, M. (2020). Bootstrapping covariance operators of functional time series. Journal of Nonparametric Statistics, 32(3), 648–666.

  • Stoehr, C., Aston, J. A. D., Kirch, C. (2019). Detecting changes in the covariance structure of functional time series with application to fMRI data. arXiv:1903.00288.

  • Van der Vaart, A. W., Wellner, J. A. (1996). Weak convergence and empirical processes: With applications in statistics. New York: Springer.

  • Zhang, X., Shao, X. (2015). Two sample inference for the second-order property of temporally dependent functional data. Bernoulli, 21(2), 909–929.


Acknowledgements

This research was partially supported by the Collaborative Research Center “Statistical modeling of nonlinear dynamic processes” (Sonderforschungsbereich 823, Teilprojekt A1, C1) and the Research Training Group “High-dimensional phenomena in probability - fluctuations and discontinuity” (RTG 2131). The authors are grateful to Christina Stoehr for sending us the results of Stoehr et al. (2019) and to Martina Stein, who typed parts of this manuscript with considerable technical expertise. The authors are also grateful to the referees for their constructive comments on an earlier version of this paper.

Author information


Corresponding author

Correspondence to Holger Dette.


Appendix: Proofs of main results

1.1 A.1: Proof of Theorem 1

We apply the central limit theorem as formulated in Theorem 2.1 in Dette et al. (2020a) to the sequence of \(C(T^2)\)-valued random variables \(((Z_{j} - \mu )^{\check{\otimes }2})_{j\in {\mathbb {N}}} = (\eta _{j}^{\check{\otimes }2})_{j\in {\mathbb {N}}}\).

It can be easily seen that conditions (A1), (A2) and (A4) in this reference are satisfied. In order to see that the remaining condition (A3) also holds, we use the triangle inequality and Assumption 1 of the present work to obtain, for any \(j\in {\mathbb {N}}\) and \(s,t,s^\prime ,t^\prime \in T\),

$$\begin{aligned} |\eta _{j}(s)\eta _{j}(t) - \eta _{j}(s^\prime )\eta _{j}(t^\prime )|&\le |\eta _{j}(s)(\eta _{j}(t) - \eta _{j}(t^\prime ))| + |\eta _{j}(t^\prime )(\eta _{j}(s) - \eta _{j}(s^\prime ))| \\&\le \Vert \eta _{j}\Vert _\infty \, \big (|\eta _{j}(t) - \eta _{j}(t^\prime )| + |\eta _{j}(s) - \eta _{j}(s^\prime )| \big ) \\&\le \Vert \eta _{j}\Vert _\infty \, M \, \big ( \rho (t, t^\prime ) + \rho (s, s^\prime ) \big ) \\&\lesssim \Vert \eta _{j}\Vert _\infty \, M \, \rho _{\max } \big ((t,s), (t^\prime , s^\prime ) \big ) \end{aligned}$$

where \(\mathbb {E}\big [ (\Vert \eta _{j}\Vert _\infty \, M )^J \big ] \le \tilde{K} < \infty \) by (A3). Now observe that

$$\begin{aligned} \frac{1}{\sqrt{n}} \sum _{j=1}^n (Z_{j} - \bar{Z}_{n})^{\check{\otimes }2} = \frac{1}{\sqrt{n}} \sum _{j=1}^n \eta _{j}^{\check{\otimes }2} -\frac{1}{\sqrt{n}} \bigg (\frac{1}{\sqrt{n}} \sum _{j=1}^n \eta _{j} \bigg )^{\check{\otimes }2} = \frac{1}{\sqrt{n}} \sum _{j=1}^n \eta _{j}^{\check{\otimes }2} + o_{{\mathbb {P}}}(1). \end{aligned}$$

Here the error \(o_{\mathbb {P}}(1)\) refers to the supremum norm: by Theorem 2.1 in Dette et al. (2020a) the sequence \(\big (\frac{1}{\sqrt{n}} \sum _{j=1}^n \eta _{j} \big )_{n \in {\mathbb {N}}} \) converges weakly in C([0, 1]), so by the continuous mapping theorem \( \Vert \frac{1}{\sqrt{n}} \sum _{j=1}^n \eta _{j} \Vert _\infty \) is of order \( O_{\mathbb {P}}(1)\), which yields \(\frac{1}{\sqrt{n}}\Vert (\frac{1}{\sqrt{n}} \sum _{j=1}^n \eta _{j})^{\check{\otimes }2} \Vert _\infty = o_{{\mathbb {P}}} (1)\). Moreover, as shown above, Theorem 2.1 in Dette et al. (2020a) can also be applied to the sequence \(( \eta _{j}^{\check{\otimes }2} )_{j\in {\mathbb {N}}}\), which yields the claim of Theorem 1. \(\square \)
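As a sanity check on the centring step, the identity used in the display above is purely algebraic; the following short numerical verification (our sketch, with hypothetical names) confirms it on simulated data:

```python
# Numerical check of the centring identity behind the stochastic expansion:
# n^{-1/2} sum_j (Z_j - Zbar)^{x2} = n^{-1/2} sum_j eta_j^{x2}
#                                  - n^{-1/2} (n^{-1/2} sum_j eta_j)^{x2}.
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 20
mu = np.linspace(0.0, 1.0, p)
Z = mu + rng.standard_normal((n, p))        # Z_j = mu + eta_j on a p-point grid
eta = Z - mu

Zc = Z - Z.mean(axis=0)
lhs = np.einsum('ja,jb->ab', Zc, Zc) / np.sqrt(n)
S = eta.sum(axis=0) / np.sqrt(n)            # n^{-1/2} sum_j eta_j
rhs = np.einsum('ja,jb->ab', eta, eta) / np.sqrt(n) - np.outer(S, S) / np.sqrt(n)
assert np.allclose(lhs, rhs)                # exact identity, up to rounding
```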

1.2 A.2: Proof of Proposition 1

As the samples are independent, it directly follows from Theorem 1 that

$$\begin{aligned} \sqrt{m+n}&\bigg ( \frac{1}{m} \sum _{j=1}^m ({\tilde{X}}_{m,j}^{\check{\otimes }2} - C_1 ), \, \frac{1}{n} \sum _{j=1}^n (\tilde{Y}_{n,j}^{\check{\otimes }2} - C_2 ) \bigg ) \\&= \sqrt{m+n} \bigg ( \frac{1}{m} \sum _{j=1}^m (\eta _{1,j}^{\check{\otimes }2} - C_1 ), \, \frac{1}{n} \sum _{j=1}^n (\eta _{2,j}^{\check{\otimes }2} - C_2 ) \bigg ) \\&\quad + o_{\mathbb {P}}(1) \rightsquigarrow \bigg ( \frac{1}{\sqrt{\lambda }} ~ Z_1, \frac{1}{\sqrt{1-\lambda }} ~ Z_2 \bigg ) \end{aligned}$$

in \(C([0,1]^2)^2\) as \(m,n\rightarrow \infty \), where \(Z_1\) and \(Z_2\) are independent, centred Gaussian processes defined by their long-run covariance operators (14) and (15). By the continuous mapping theorem it follows that

$$\begin{aligned} Z_{m,n} = \sqrt{m+n} \, \bigg ( \frac{1}{m} \sum _{j=1}^m {\tilde{X}}_{m,j}^{\check{\otimes }2} - \, \frac{1}{n} \sum _{j=1}^n \tilde{Y}_{n,j}^{\check{\otimes }2} - (C_1 - C_2) \bigg ) \rightsquigarrow Z \end{aligned}$$
(50)

in \(C([0,1]^2)\) as \(m,n\rightarrow \infty \) (the error \(o_{\mathbb {P}}(1)\) refers again to the supremum norm), where Z is again a centred Gaussian process with covariance operator (13).

If \(d_\infty = 0\), the convergence in (50) together with the continuous mapping theorem yields (12). If \(d_\infty > 0\), the asymptotic distribution of \({\hat{d}}_\infty \) can be deduced from Theorem B.1 in the online supplement of Dette et al. (2020a) or alternatively from the results in Cárcamo et al. (2020). \(\square \)

1.3 A.3: Proofs of Theorems 2 and 3

Proof of Theorem 2. Using similar arguments as in the proof of Theorem 1, it follows that the process \( {\hat{B}}^{(r)}_{m,n}\) in (18) admits the stochastic expansion

$$\begin{aligned} {\hat{B}}^{(r)}_{m,n}&= \sqrt{n+m} \bigg \{ \frac{1}{m} \sum _{k=1}^{m-l_1+1} \frac{1}{\sqrt{l_1}}\bigg ( \sum _{j=k}^{k+l_1-1} \eta _{1,j}^{\check{\otimes }2} -\frac{l_1}{m}\sum _{i=1}^m \eta _{1,i}^{\check{\otimes }2} \bigg ) \xi _k^{(r)} \\&\quad - \frac{1}{n} \sum _{k=1}^{n-l_2+1} \frac{1}{\sqrt{l_2}}\bigg ( \sum _{j=k}^{k+l_2-1} \eta _{2,j}^{\check{\otimes }2} -\frac{l_2}{n}\sum _{i=1}^n \eta _{2,i}^{\check{\otimes }2} \bigg ) \zeta _k^{(r)} \bigg \} + O \big ( R_{m,n} \big ), \end{aligned}$$

where the remainder is defined by \(R_{m,n}=R^{(1)}_m - R^{(2)}_n\) with

$$\begin{aligned} R^{(1)}_m = \frac{1}{\sqrt{m}} \sum _{k=1}^{m-l_1+1} \frac{1}{\sqrt{l_1}} \left( - \sum _{j=k}^{k+l_1-1} \eta _{1,j} \check{\otimes }\bar{\eta }_1 - \bar{\eta }_1 \check{\otimes }\sum _{j=k}^{k+l_1-1} \eta _{1,j} + 2l_1 \bar{\eta }_1^{\check{\otimes }2} \right) \xi ^{(r)}_k , \end{aligned}$$
(51)
$$\begin{aligned} R^{(2)}_n = \frac{1}{\sqrt{n}} \sum _{k=1}^{n-l_2+1} \frac{1}{\sqrt{l_2}} \left( - \sum _{j=k}^{k+l_2-1} \eta _{2,j} \check{\otimes }\bar{\eta }_2 - \bar{\eta }_2 \check{\otimes }\sum _{j=k}^{k+l_2-1} \eta _{2,j} + 2l_2 \bar{\eta }_2^{\check{\otimes }2} \right) \zeta ^{(r)}_k. \end{aligned}$$
(52)

Because \(R^{(1)}_m\) and \(R^{(2)}_n\) have a similar structure, we consider only the former. It is easy to see that \(\Vert \bar{\eta }_1^{\check{\otimes }2}\Vert _\infty = O_{\mathbb {P}}(\frac{1}{m})\), so the third term in (51) is of order \(O_{\mathbb {P}} \big (\sqrt{\frac{l_1}{m}}\big ) = o_{\mathbb {P}}(1)\). The first and second terms in (51) can be treated in the same way, and we consider only the first. It follows from the proof of Theorem 4.3 in Dette et al. (2020a) that the term

$$\begin{aligned} \left\| \frac{1}{\sqrt{m}} \sum _{k=1}^{m-l_1+1} \frac{1}{\sqrt{l_1}}\left( \sum _{j=k}^{k+l_1-1} \eta _{1,j} \right) \xi ^{(r)}_k \right\| _\infty \end{aligned}$$

is of order \(O_{\mathbb {P}}(1) \). By Theorem 2.1 in the same reference the second factor of the tensor satisfies \(\Vert \bar{\eta }_1\Vert _\infty = O_\mathbb {P}(\frac{1}{\sqrt{m}})\), and therefore the first term in (51) is of order \(o_\mathbb {P}(1)\). Using similar arguments for the second term in (51) and the summand \(R^{(2)}_n\) yields

$$\begin{aligned} R_{m,n}= o_\mathbb {P}(1). \end{aligned}$$

Next, note that the sequences \((\eta _{1,j}^{\check{\otimes }2})_{j\in {\mathbb {N}}}\) and \((\eta _{2,j}^{\check{\otimes }2})_{j\in {\mathbb {N}}}\) satisfy Assumption 2.1 in Dette et al. (2020a).

Thus, similar arguments as in the proofs of Theorems 3.3 and 4.3 in the same reference yield

$$\begin{aligned} \big ( Z_{m,n} , {\hat{B}}_{m,n}^{(1)},\dots ,{\hat{B}}_{m,n}^{(R)}\big ) \rightsquigarrow (Z, Z^{(1)},\ldots ,Z^{(R)}) \end{aligned}$$
(53)

in \(C([0,1]^2)^{R+1}\) as \(m,n \rightarrow \infty \), where the process \(Z_{m,n}\) is defined in (50) and the random functions \(Z^{(1)},\ldots ,Z^{(R)}\) are independent copies of the limit process Z in (50). Note that in that reference the authors prove weak convergence of a vector in \(C([0,1])^{R+1}\). The proof of weak convergence of the finite-dimensional distributions can be directly transferred to vectors in \(C([0,1]^2)^{R+1}\), while the proof of equicontinuity requires condition (A1) in Assumption 1, which reduces for the space \(C([0,1]^2)\) to (6). If \(d_\infty = 0\), the continuous mapping theorem implies

$$\begin{aligned} \big ( \sqrt{m+n} \, \hat{d}_{\infty } ,~ T_{m,n}^{(1)},\ldots ,T_{m,n}^{(R)}\big ) {\mathop {\longrightarrow }\limits ^{\mathcal {D}}} (T,~ T^{(1)},\ldots ,T^{(R)}) \end{aligned}$$
(54)

in \({\mathbb {R}}^{R+1}\) as \(m,n \rightarrow \infty \) where the statistic \(\hat{d}_\infty \) is defined by (11), the bootstrap statistics \(T_{m,n}^{(1)},\ldots ,T_{m,n}^{(R)}\) are defined by (19) and the random variables \(T^{(1)},\ldots ,T^{(R)}\) are independent copies of T which is defined by (12). Now, Lemma 4.2 in Bücher and Kojadinovic (2019) directly implies (21), that is,

$$\begin{aligned} \lim _{m,n,R\rightarrow \infty } \mathbb {P}\bigg ( \hat{d}_{\infty } > \frac{T_{m,n}^{\{\lfloor R(1-\alpha )\rfloor \}}}{\sqrt{m+n}} \bigg ) = \alpha . \end{aligned}$$
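In practice, the rejection rule in the last display amounts to comparing \(\hat d_\infty\) with the \(\lfloor R(1-\alpha)\rfloor\)-th order statistic of the bootstrap sample; a minimal sketch (our illustration, hypothetical names):

```python
# Sketch of the decision rule behind (21): reject H_0 at level alpha if the
# statistic exceeds the floor(R(1-alpha))-th order statistic of the R
# bootstrap statistics, rescaled by (m+n)^{-1/2}.
import numpy as np

def bootstrap_reject(d_hat, T_boot, n_total, alpha=0.05):
    """d_hat: sup-norm statistic; T_boot: R bootstrap statistics; n_total: m+n."""
    R = len(T_boot)
    k = int(np.floor(R * (1 - alpha)))          # order-statistic index
    threshold = np.sort(T_boot)[k - 1]          # k-th order statistic (1-based)
    return d_hat > threshold / np.sqrt(n_total)
```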

For the application of Lemma 4.2 in Bücher and Kojadinovic (2019), it is required that the random variable T has a continuous distribution function, which follows from Gaenssler et al. (2007). In order to show the consistency of the test (20) in the case \(d_\infty >0\), write

$$\begin{aligned} \mathbb {P}\bigg ( \hat{d}_{\infty }> \frac{T_{m,n}^{\{\lfloor R(1-\alpha )\rfloor \}}}{\sqrt{m+n}} \bigg )&= \mathbb {P}\big ( \sqrt{m+n} \, (\hat{d}_{\infty } - d_{\infty }) + \sqrt{m+n} \, d_{\infty } > T_{m,n}^{\{\lfloor R(1-\alpha )\rfloor \}} \big ) \end{aligned}$$

and note that, given (54) and (16), the assertion in (22) follows by simple arguments. \(\square \)

Proof of Theorem 3. First note that the same arguments as in the proof of Theorem 3.6 in Dette et al. (2020a) show that the estimators of the extremal sets defined by (23) are consistent, that is,

$$\begin{aligned} d_H( \hat{\mathcal {E}}_{m,n}^\pm , \mathcal {E}^\pm ) \xrightarrow [m,n\rightarrow \infty ]{{\mathbb {P}}} 0, \end{aligned}$$

where \(d_H\) denotes the Hausdorff distance. Thus, given the convergence in (53), the arguments in the proof of Theorem 3.7 in the same reference yield

$$\begin{aligned} \big ( \sqrt{n+m} ~ (\hat{d}_\infty - d_\infty ) ,~ K_{m,n}^{(1)},\ldots ,K_{m,n}^{(R)}\big ) {\mathop {\longrightarrow }\limits ^{\mathcal {D}}} (T(\mathcal {E}),~ T^{(1)}(\mathcal {E}),\ldots ,T^{(R)}(\mathcal {E})) \end{aligned}$$
(55)

in \({\mathbb {R}}^{R+1}\) as \(m,n \rightarrow \infty \) where the statistic \(\hat{d}_\infty \) is defined by (11), the bootstrap statistics \(K_{m,n}^{(1)},\ldots ,K_{m,n}^{(R)}\) are defined by (24) and the random variables \(T^{(1)}({\mathcal {E}}),\ldots ,T^{(R)}({\mathcal {E}})\) are independent copies of \(T(\mathcal {E})\) which is defined by (16). Note that this convergence holds true under the null and the alternative hypothesis.

If \(\varDelta = d_\infty \), Lemma 4.2 in Bücher and Kojadinovic (2019) directly implies (26) and again the results in Gaenssler et al. (2007) ensure that the limit \(T(\mathcal {E})\) has a continuous distribution function.

If \(\varDelta \ne d_\infty \), write

$$\begin{aligned} \mathbb {P}\bigg ( \hat{d}_{\infty }> \varDelta + \frac{K_{m,n}^{\{\lfloor R(1-\alpha )\rfloor \}}}{\sqrt{n+m}} \bigg ) = \mathbb {P}\big ( \sqrt{m+n} \, (\hat{d}_{\infty } - d_{\infty }) + \sqrt{m+n} \, (d_{\infty } - \varDelta ) > K_{m,n}^{\{\lfloor R(1-\alpha )\rfloor \}} \big ). \end{aligned}$$

Then, it follows from (55) and simple arguments that, for any \(R\in {\mathbb {N}}\),

$$\begin{aligned} \lim _{m,n\rightarrow \infty } \mathbb {P}\bigg ( \hat{d}_{\infty }> \varDelta + \frac{K_{m,n}^{\{\lfloor R(1-\alpha )\rfloor \}}}{\sqrt{n+m}} \bigg ) = 0 \quad \text {and} \quad \liminf _{m,n\rightarrow \infty } \mathbb {P}\bigg ( \hat{d}_{\infty } > \varDelta + \frac{K_{m,n}^{\{\lfloor R(1-\alpha )\rfloor \}}}{\sqrt{n+m}} \bigg ) = 1 \end{aligned}$$

if \(\varDelta > d_\infty \) and \(\varDelta < d_\infty \), respectively. This proves the remaining assertions of Theorem 3. \(\square \)
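The Hausdorff consistency used in this proof is easy to inspect numerically when the extremal sets are represented by finite grids of points; a small sketch (our illustration, using SciPy's directed Hausdorff distance):

```python
# Hausdorff distance d_H between an estimated extremal set and its
# population counterpart, both stored as finite point sets in [0,1]^2
# (names are ours; directed_hausdorff is SciPy's).
import numpy as np
from scipy.spatial.distance import directed_hausdorff

E_hat = np.array([[0.50, 0.50], [0.52, 0.48]])   # estimated extremal points
E_pop = np.array([[0.50, 0.50]])                 # population extremal set
d_H = max(directed_hausdorff(E_hat, E_pop)[0],
          directed_hausdorff(E_pop, E_hat)[0])
```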

1.4 A.4: Proof of Proposition 2

Let \(C_{n,j}\) denote the covariance operator of \(X_{n,j}\) defined by \(C_{n,j}(s,t) = \text {Cov}(X_{n,j}(s),X_{n,j}(t))\) and consider the sequential process

$$\begin{aligned} \hat{\mathbb {V}}_n (s)&= \frac{1}{\sqrt{n}} \sum ^{\lfloor s n \rfloor }_{j=1} ({\tilde{X}}_{n,j}^{\check{\otimes }2} - C_{n,j}) + \sqrt{n} \left( s \, - \frac{\lfloor s n \rfloor }{n} \right) \left( {\tilde{X}}_{n, \lfloor s n \rfloor +1}^{\check{\otimes }2} - C_{n,\lfloor s n \rfloor +1} \right) \, \\&= \frac{1}{\sqrt{n}} \sum ^{\lfloor s n \rfloor }_{j=1} ({\tilde{\eta }}_{n,j}^{\check{\otimes }2} - C_{n,j}) + \sqrt{n} \left( s \, - \frac{\lfloor s n \rfloor }{n} \right) \left( {\tilde{\eta }}_{n, \lfloor s n \rfloor +1}^{\check{\otimes }2} - C_{n,\lfloor s n \rfloor +1} \right) + o_{\mathbb {P}}(1) \end{aligned}$$

which is an element of \(C([0,1], C([0,1]^2))\). Here the order \( o_{\mathbb {P}}(1)\) for the remainder is obtained by similar arguments as given at the beginning of the proof of Theorem 2 and the details are omitted for the sake of brevity. Note that \(\{\hat{\mathbb {V}}_n(s)\}_{s\in [0,1]}\) can equivalently be regarded as an element of \(C([0,1]^3)\) and we have the representation

$$\begin{aligned} \hat{\mathbb {V}}_n = \tilde{\mathbb {V}}_{1,n} + \tilde{\mathbb {V}}_{2,n} , \end{aligned}$$
(56)

where the processes \( \tilde{\mathbb {V}}_{1,n}, \tilde{\mathbb {V}}_{2,n} \in C([0,1]^3)\) are defined by

$$\begin{aligned} \tilde{\mathbb {V}}_{1,n}(s,t,u)&= \, \hat{\mathbb {V}}_{1,n}(s,t,u) \mathbb {1}\{\lfloor sn \rfloor < \lfloor s^* n \rfloor \} + \hat{\mathbb {V}}_{1,n}(\lfloor s^* n \rfloor /n,t,u) \mathbb {1}\{\lfloor sn \rfloor \ge \lfloor s^* n \rfloor \} \\ \tilde{\mathbb {V}}_{2,n}(s,t,u)&= \, (\hat{\mathbb {V}}_{2,n}(s,t,u) - \hat{\mathbb {V}}_{2,n}(\lfloor s^* n \rfloor /n,t,u)) \mathbb {1}\{\lfloor sn \rfloor \ge \lfloor s^* n \rfloor \} \end{aligned}$$

(\(s,t,u\in [0,1]\)) and

$$\begin{aligned} \hat{\mathbb {V}}_{l,n}(s) = \frac{1}{\sqrt{n}} \sum ^{\lfloor s n \rfloor }_{j=1} (\tilde{\eta }_{n,j}^{\check{\otimes }2} - C_{l}) + \sqrt{n} \Big (s \, - \frac{\lfloor s n \rfloor }{n} \Big ) \big (\tilde{\eta }_{n, \lfloor s n \rfloor +1}^{\check{\otimes }2} - C_{l} \big ) \quad (l = 1,2). \end{aligned}$$
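For intuition, the sequential processes above are ordinary partial-sum processes with linear interpolation between the grid points \(j/n\); evaluated at a fixed pair \((t,u)\) they reduce to the following one-dimensional construction (our sketch, hypothetical names):

```python
# Sketch of the linearly interpolated sequential partial-sum process
# s -> V_n(s) at a fixed grid point (t,u): v[j] plays the role of
# eta_j(t)eta_j(u) - C(t,u).
import numpy as np

def sequential_process(v, s):
    """v: length-n array of centred summands; s in [0, 1]."""
    n = len(v)
    k = int(np.floor(s * n))
    out = v[:k].sum() / np.sqrt(n)
    if k < n:                                   # linear interpolation term
        out += np.sqrt(n) * (s - k / n) * v[k]
    return out

v = np.random.default_rng(3).standard_normal(500)
path = [sequential_process(v, s) for s in np.linspace(0, 1, 101)]
```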

Recall the definition of the array (\(\tilde{\eta }_{n,j} :n\in {\mathbb {N}}, j = 1,\ldots , n\)) in (27). By Theorem 2.2 in Dette et al. (2020a) it follows that

$$\begin{aligned} \hat{\mathbb {V}}_{l,n} \rightsquigarrow \mathbb {V}_l \quad (l = 1,2) \end{aligned}$$

in \(C([0,1]^3)\), where \(\mathbb {V}_l\) is a centred Gaussian measure on \(C([0,1]^3)\) characterized by the covariance operator

$$\begin{aligned} \text {Cov}\big (\mathbb {V}_l(s,t,u), \mathbb {V}_l(s^\prime ,t^\prime ,u^\prime ) \big )&= (s \wedge s^\prime ) \, \mathbb {C}_l((t,u),(t^\prime , u^\prime )), \quad l = 1,2 \end{aligned}$$

and the long-run covariance operator \(\mathbb C_l\) is defined in (32). From the continuous mapping theorem, we obtain

$$\begin{aligned} \tilde{\mathbb {V}}_{l,n} \rightsquigarrow \tilde{\mathbb {V}}_l \quad \quad (l = 1,2) \end{aligned}$$
(57)

in \(C([0,1]^3)\), where \(\tilde{\mathbb {V}}_1, \tilde{\mathbb {V}}_2\) are centred Gaussian measures on \(C([0,1]^3)\) characterized by

$$\begin{aligned} \tilde{\mathbb {V}}_1 (s,t,u) = \mathbb {V}_{1}(s \wedge s^*,t,u) \, , \quad \tilde{\mathbb {V}}_2 (s,t,u) = (\mathbb {V}_{2}(s,t,u) - \mathbb {V}_{2}(s^*,t,u)) \mathbb {1}\{ s \ge s^* \} \end{aligned}$$

with covariance operators

$$\begin{aligned} \text {Cov}\big (\tilde{\mathbb {V}}_1(s,t,u), \tilde{\mathbb {V}}_1(s^\prime ,t^\prime ,u^\prime ) \big )&= (s \wedge s^\prime \wedge s^*) \, \mathbb {C}_1((t,u),(t^\prime , u^\prime )) \\ \text {Cov}\big (\tilde{\mathbb {V}}_2(s,t,u), \tilde{\mathbb {V}}_2(s^\prime ,t^\prime ,u^\prime ) \big )&= (s\wedge s^\prime - s^*)_+ \, \mathbb {C}_2((t,u),(t^\prime , u^\prime )) \, . \end{aligned}$$

In the following we will show the weak convergence

$$\begin{aligned} \hat{\mathbb {V}}_n \rightsquigarrow \mathbb {V} \end{aligned}$$
(58)

in \(C([0,1]^3)\) as \(n\rightarrow \infty \), where \(\mathbb {V}\in C([0,1]^3)\) is a centred Gaussian random variable characterized by its covariance operator

$$\begin{aligned} \text {Cov}(\mathbb {V}(s,t,u), \mathbb {V}(s^\prime ,t^\prime ,u^\prime )) = (s\wedge s^\prime \wedge s^*) \, \mathbb {C}_1((t,u), (t^\prime , u^\prime )) + (s\wedge s^\prime - s^*)_+ \, \mathbb {C}_2((t,u), (t^\prime , u^\prime )) \end{aligned}$$

and the long-run covariance operators \(\mathbb {C}_1, \mathbb {C}_2\) are defined by (32). The convergence in (57) implies that the processes \(\tilde{\mathbb {V}}_{1,n}, \tilde{\mathbb {V}}_{2,n}\) are asymptotically tight and the representation in (56) yields that \(\hat{\mathbb {V}}_{n}\) is asymptotically tight as well (see Section 1.5 in Van der Vaart and Wellner 1996). In order to prove the convergence in (58), it consequently remains to show the convergence of the finite-dimensional distributions. For this, we utilize the Cramér–Wold device and show that

$$\begin{aligned} \tilde{Z}_n&= \sum _{j=1}^q c_j \hat{\mathbb {V}}_n (s_j,t_j,u_j) = \sum _{j=1}^q c_j \big \{ \tilde{\mathbb {V}}_{1,n}(s_j,t_j,u_j) + \tilde{\mathbb {V}}_{2,n}(s_j,t_j,u_j) \big \} \\&{\mathop {\longrightarrow }\limits ^{\mathcal {D}}} \tilde{Z} = \sum _{j=1}^q c_j \mathbb {V}(s_j ,t_j,u_j) \end{aligned}$$

for any \((s_1,t_1,u_1),\dots ,(s_q,t_q,u_q) \in [0,1]^3\), \(c_1,\ldots ,c_q \in {\mathbb {R}}\) and \(q\in {\mathbb {N}}\). Asymptotic normality of \(\tilde{Z}_n\) can be proved by the same arguments as in the proof of Theorem 2.1 in Dette et al. (2020a), and it remains to show that the variance of the random variable \(\tilde{Z}_n\) converges to the variance of \(\tilde{Z}\). Using (3.17) in Dehling and Philipp (2002) and assumptions (A2) and (A4) we obtain for any \((s,t,u),(s^\prime , t^\prime , u^\prime ) \in [0,1]^3\)

$$\begin{aligned} \begin{aligned}&\text {Cov}(\tilde{\mathbb {V}}_{1,n} (s,t,u), \tilde{\mathbb {V}}_{2,n} (s^\prime ,t^\prime ,u^\prime ) ) \\&\quad = \frac{1}{n} \sum ^{\lfloor (s \wedge s^*) n \rfloor }_{j=1} \sum ^{\lfloor s^\prime n \rfloor }_{i=\lfloor s^* n \rfloor + 1} \text {Cov}(\tilde{\eta }_{n,j}^{\check{\otimes }2}(t,u), \tilde{\eta }_{n,i}^{\check{\otimes }2}(t^\prime ,u^\prime )) + o(1) \\&\quad \lesssim \frac{1}{n} \sum ^{\lfloor (s \wedge s^*) n \rfloor }_{j=1} \sum ^{\lfloor s^\prime n \rfloor }_{i=\lfloor s^* n \rfloor + 1} \Vert \tilde{\eta }_{n,j}^{\check{\otimes }2}(t,u)\Vert _2 \, \Vert \tilde{\eta }_{n,i}^{\check{\otimes }2}(t^\prime ,u^\prime )\Vert _2 \, \varphi (i-j)^{1/2} + o(1) \\&\quad \lesssim \frac{1}{n} \sum ^{\lfloor (s \wedge s^*) n \rfloor }_{j=1} \sum ^{\lfloor s^\prime n \rfloor }_{i=\lfloor s^* n \rfloor + 1} \varphi (i-j)^{1/2} + o(1) \lesssim \frac{1}{n} \sum ^{\lfloor s^\prime n \rfloor - 1}_{i = 1} i \varphi (i)^{1/2} + o(1){\longrightarrow _{n\rightarrow \infty }} 0, \end{aligned} \end{aligned}$$
(59)

where the symbol “\(\lesssim \)” means less than or equal up to a constant independent of n, and \(\Vert X\Vert _2 = \mathbb {E}[X^2]^{1/2}\) denotes the \(L^2\)-norm of a real-valued random variable X (also note that we implicitly assume \(\sum _{i=j}^k a_i = 0\) if \(k<j\)). Furthermore, assuming without loss of generality that \(s \le s^\prime \), we have

$$\begin{aligned}&\text {Cov}(\tilde{\mathbb {V}}_{1,n} (s,t,u), \tilde{\mathbb {V}}_{1,n} (s^\prime ,t^\prime ,u^\prime ) ) = \frac{1}{n} \sum ^{\lfloor (s\wedge s^*) n \rfloor }_{j=1} \sum ^{\lfloor (s^\prime \wedge s^*) n \rfloor }_{i= 1} \text {Cov}(\eta _{1,j}^{\check{\otimes }2}(t,u), \eta _{1,i}^{\check{\otimes }2}(t^\prime ,u^\prime )) + o(1) \\&\quad = \frac{1}{n} \sum ^{\lfloor (s\wedge s^*) n \rfloor }_{j=1} \left( \sum ^{\lfloor (s \wedge s^*) n \rfloor }_{i= 1} + \sum ^{\lfloor (s^\prime \wedge s^*) n \rfloor }_{i=\lfloor (s \wedge s^*) n \rfloor + 1} \right) \text {Cov}(\eta _{1,j}^{\check{\otimes }2}(t,u), \eta _{1,i}^{\check{\otimes }2}(t^\prime ,u^\prime )) + o(1) \\&\quad = \frac{1}{n} \sum ^{\lfloor (s\wedge s^*) n \rfloor }_{j=1} \sum ^{\lfloor (s \wedge s^*) n \rfloor }_{i= 1} \text {Cov}(\eta _{1,j}^{\check{\otimes }2}(t,u), \eta _{1,i}^{\check{\otimes }2}(t^\prime ,u^\prime )) + o(1), \end{aligned}$$

where the last equality follows by the same arguments as used in (59). For the remaining expression we use the dominated convergence theorem to obtain

$$\begin{aligned}&\frac{1}{n} \sum ^{\lfloor (s\wedge s^*) n \rfloor }_{j=1} \sum ^{\lfloor (s \wedge s^*) n \rfloor }_{i= 1} \text {Cov}(\eta _{1,j}^{\check{\otimes }2}(t,u), \eta _{1,i}^{\check{\otimes }2}(t^\prime ,u^\prime )) \\&= \sum ^{\lfloor (s \wedge s^*) n \rfloor -1}_{i= -(\lfloor (s \wedge s^*) n \rfloor -1)} \frac{\lfloor (s \wedge s^*) n \rfloor - |i|}{n} \, \text {Cov}(\eta _{1,0}^{\check{\otimes }2}(t,u), \eta _{1,i}^{\check{\otimes }2}(t^\prime ,u^\prime )) {\longrightarrow _{{n \rightarrow \infty }}} (s \wedge s^*) \, \mathbb {C}_1((t,u), (t^\prime , u^\prime )) \end{aligned}$$

which means that for any \((s,t,u),(s^\prime , t^\prime , u^\prime ) \in [0,1]^3\)

$$\begin{aligned} \text {Cov}(\tilde{\mathbb {V}}_{1,n} (s,t,u), \tilde{\mathbb {V}}_{1,n} (s^\prime ,t^\prime ,u^\prime ) ) \longrightarrow _{n\rightarrow \infty } (s \wedge s^\prime \wedge s^*) \, \mathbb {C}_1((t,u), (t^\prime , u^\prime )) . \end{aligned}$$

By similar arguments we obtain

$$\begin{aligned} \text {Cov}(\tilde{\mathbb {V}}_{2,n} (s,t,u), \tilde{\mathbb {V}}_{2,n} (s^\prime ,t^\prime ,u^\prime ) ) \longrightarrow _{n\rightarrow \infty } (s \wedge s^\prime - s^*)_+ \, \mathbb {C}_2((t,u), (t^\prime , u^\prime )) \end{aligned}$$

and therefore we have

$$\begin{aligned} \text {Var}(\tilde{Z}_n )&= \sum _{j=1}^q \sum _{j^\prime =1}^q c_j c_{j^\prime } \text {Cov}(\hat{\mathbb {V}}_n (s_j,t_j,u_j), \hat{\mathbb {V}}_n (s_{j^\prime },t_{j^\prime },u_{j^\prime }) ) \\&= \sum _{j=1}^q \sum _{j^\prime =1}^q c_j c_{j^\prime } \big \{ \text {Cov}(\tilde{\mathbb {V}}_{1,n} (s_j,t_j,u_j), \tilde{\mathbb {V}}_{1,n} (s_{j^\prime },t_{j^\prime },u_{j^\prime }) ) \\&\quad + \text {Cov}(\tilde{\mathbb {V}}_{2,n} (s_j,t_j,u_j), \tilde{\mathbb {V}}_{2,n} (s_{j^\prime },t_{j^\prime },u_{j^\prime }) ) \big \} + o(1) \\&{\longrightarrow }_{n\rightarrow \infty } \sum _{j=1}^q \sum _{j^\prime =1}^q c_j c_{j^\prime } \text {Cov}(\mathbb {V} (s_j,t_j,u_j), \mathbb {V} (s_{j^\prime },t_{j^\prime },u_{j^\prime }) ) = \text {Var}(\tilde{Z}) \end{aligned}$$

which finally proves (58).

Next we define the \(C([0,1]^3)\)-valued process

$$\begin{aligned} \hat{\mathbb {W}}_n(s,t,u) = \hat{\mathbb {V}}_n(s,t,u) - s \hat{\mathbb {V}}_n(1,t,u) \, , \qquad s,t,u \in [0,1] \, , \end{aligned}$$
(60)

then the convergence in (58) and the continuous mapping theorem yield

$$\begin{aligned} \hat{\mathbb {W}}_n \rightsquigarrow \mathbb {W} \end{aligned}$$
(61)

in \(C([0,1]^3)\), where \(\mathbb {W}\) is a centred Gaussian process defined by \( \mathbb {W}(s,t,u)= \mathbb {V}(s,t,u) - s \mathbb {V}(1,t,u) \) with covariance operator given by (31). Finally, recall the definition of the process \((\hat{\mathbb {U}}_{n}:n\in \mathbb {N})\) in (28) and note that, in contrast to \(\hat{\mathbb {W}}_n\), this process is not centred. Consequently, if \(d_\infty = 0\), we have \(\sqrt{n}\, \hat{\mathbb {U}}_n = \hat{\mathbb {W}}_n\), and the convergence in (61) and the continuous mapping theorem directly yield (30).
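On a discretisation, the transform (60) is a one-liner; the following sketch (ours, illustrative names) applies it to a field stored as an array and indicates how the sup-norm statistic is formed:

```python
# Sketch of the CUSUM transform (60) on a discretised field:
# W(s,t,u) = V(s,t,u) - s * V(1,t,u), with s on the grid {0, 1/n, ..., 1}.
import numpy as np

def cusum_transform(V):
    """V: array of shape (n+1, p, p) holding V_n(j/n, ., .) for j = 0, ..., n."""
    n = V.shape[0] - 1
    s = (np.arange(n + 1) / n)[:, None, None]
    return V - s * V[-1]                  # subtract s times the endpoint V_n(1,.,.)

# The statistic is then the sup-norm of the transformed field:
# M_hat = np.abs(cusum_transform(V)).max()
```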

If \(d_\infty > 0\), assertion (33) is a consequence of the weak convergence in (61) and Theorem B.1 in the online supplement of Dette et al. (2020a), or alternatively of the results in Cárcamo et al. (2020). \(\square \)

1.5 A.5: Proofs of Theorems 4 and 5

Proof of Theorem 4. Recalling the definition of the bootstrap processes in (34), it can be shown by similar arguments as given at the beginning of the proof of Theorem 2 that

$$\begin{aligned} \sup _{(s,t,u) \in [0,1]^{3}} | \hat{B}_n^{(r)}(s,t,u) - \hat{C}_n^{(r)}(s,t,u) | = o_{\mathbb {P}}(1) \end{aligned}$$
(62)

(for \(r = 1,\dots , R\)), where

$$\begin{aligned} \begin{aligned} \hat{C}_n^{(r)}(s,t,u)&= \frac{1}{\sqrt{n}} \sum _{k=1}^{\lfloor sn \rfloor } \frac{1}{\sqrt{l}} \left( \sum _{j=k}^{k+l-1} \tilde{Y}_{n,j}(t,u) - \frac{l}{n} \sum _{j=1}^n \tilde{Y}_{n,j}(t,u) \right) \xi _k^{(r)} \\&+ \sqrt{n}\left( s - \frac{\lfloor sn \rfloor }{n} \right) \frac{1}{\sqrt{l}} \left( \sum _{j=\lfloor sn \rfloor +1}^{\lfloor sn \rfloor +l} \tilde{Y}_{n,j}(t,u) - \frac{l}{n} \sum _{j=1}^n \tilde{Y}_{n,j}(t,u) \right) \xi _{\lfloor sn \rfloor +1}^{(r)} \end{aligned}, \end{aligned}$$

and \( \tilde{Y}_{n,j}(t,u) = \tilde{\eta }_{n,j}^{\check{\otimes }2}(t,u) - (\hat{C}_2 - \hat{C}_1)(t,u) \, \mathbb {1}\{j > \lfloor \hat{s}n \rfloor \} \) for \(j=1,\dots ,n\). The array \((\tilde{\eta }_{n,j}^{\check{\otimes }2} \, :n\in {\mathbb {N}}, ~ j = 1,\ldots , n)\) satisfies (A1), (A3) and (A4) of Assumption 2.1 in Dette et al. (2020a). The convergence in (61) and similar arguments as in the proof of Theorem 4.3 in the same reference show

$$\begin{aligned} (\hat{\mathbb {V}}_n, \hat{B}_n^{(1)},\ldots , \hat{B}_n^{(R)}) \rightsquigarrow (\mathbb {V}, \mathbb {V}^{(1)},\ldots ,\mathbb {V}^{(R)}) \, \end{aligned}$$
(63)

in \(C([0,1]^3)^{R+1}\) as \(n \rightarrow \infty \), where the process \(\mathbb {V}\) is the limit in (58) and \(\mathbb {V}^{(1)},\dots ,\mathbb {V}^{(R)}\) are independent copies of \(\mathbb {V}\). For the sake of completeness we repeat the necessary main steps, which are proved by arguments analogous to those in Dette et al. (2020a). First we define \( {Y}_{n,j}(t,u) = \tilde{\eta }_{n,j}^{\check{\otimes }2}(t,u) - ({C}_2 - {C}_1)(t,u) \, \mathbb {1}\{j > \lfloor {s}^{*} n \rfloor \} \) and show the approximation

$$\begin{aligned} \sup _{(s,t,u) \in [0,1]^{3}} \big | \hat{C}_n^{(r)}(s,t,u) - \bar{C}_n^{(r)}(s,t,u) \big | = o_{\mathbb {P}}(1) , \end{aligned}$$
(64)

where the process \( \bar{C}_n^{(r)}\) is defined by

$$\begin{aligned} \begin{aligned} {\bar{C}}_n^{(r)}(s,t,u)&= \frac{1}{\sqrt{n}} \sum _{k=1}^{\lfloor sn \rfloor } \frac{1}{\sqrt{l}} \left( \sum _{j=k}^{k+l-1} {Y}_{n,j}(t,u) - \frac{l}{n} \sum _{j=1}^n {Y}_{n,j}(t,u) \right) \xi _k^{(r)} \\&+ \sqrt{n}\left( s - \frac{\lfloor sn \rfloor }{n} \right) \frac{1}{\sqrt{l}} \left( \sum _{j=\lfloor sn \rfloor +1}^{\lfloor sn \rfloor +l} {Y}_{n,j}(t,u) - \frac{l}{n} \sum _{j=1}^n {Y}_{n,j}(t,u) \right) \xi _{\lfloor sn \rfloor +1}^{(r)} \end{aligned}. \end{aligned}$$

In a second step we show

$$\begin{aligned} \sup _{(s,t,u) \in [0,1]^{3}} \big | \bar{C}_n^{(r)}(s,t,u) - \tilde{C}_n^{(r)}(s,t,u) \big | = o_{\mathbb {P}}(1) , \end{aligned}$$
(65)

where the process \( \tilde{C}_n^{(r)}\) is defined by

$$\begin{aligned} \begin{aligned} \tilde{C}_n^{(r)}(s,t,u)&= \frac{1}{\sqrt{n}} \sum _{k=1}^{\lfloor sn \rfloor } \frac{1}{\sqrt{l}} \left( \sum _{j=k}^{k+l-1} \left( {Y}_{n,j}(t,u) - C_{1} (t,u) \right) \right) \xi _k^{(r)} \\&+ \sqrt{n}\left( s - \frac{\lfloor sn \rfloor }{n} \right) \frac{1}{\sqrt{l}} \left( \sum _{j=\lfloor sn \rfloor +1}^{\lfloor sn \rfloor +l} \left( {Y}_{n,j}(t,u) - C_{1} (t,u) \right) \right) \xi _{\lfloor sn \rfloor +1}^{(r)} \end{aligned}. \end{aligned}$$

In a third step one notes that

$$\begin{aligned} {Y}_{n,j}(t,u) = {\left\{ \begin{array}{ll} \tilde{\eta }_{n,j}^{\check{\otimes }2}(t,u) - {C}_1 &{} \text { if } j \le \lfloor {s}^{*} n \rfloor \ \\ \tilde{\eta }_{n,j}^{\check{\otimes }2}(t,u) - {C}_2 &{} \text { if } j > \lfloor {s}^{*} n \rfloor \end{array}\right. } \end{aligned}$$

and shows the weak convergence

$$\begin{aligned} (\hat{\mathbb {V}}_n,\tilde{C}_n^{(1)},\ldots ,\tilde{C}_n^{(R)}) \rightsquigarrow (\mathbb {V}, \mathbb {V}^{(1)},\ldots ,\mathbb {V}^{(R)}) \, \end{aligned}$$

in \(C([0,1]^3)^{R+1}\) as \(n \rightarrow \infty \), where the process \(\mathbb {V}\) is the limit in (58) and \(\mathbb {V}^{(1)},\ldots ,\mathbb {V}^{(R)}\) are independent copies of \(\mathbb {V}\). Observing (62), (64) and (65) then proves the weak convergence in (63). Finally, this result and the continuous mapping theorem yield

$$\begin{aligned} \big ( \hat{\mathbb {W}}_n , \hat{\mathbb {W}}_n^{(1)},\ldots ,\hat{\mathbb {W}}_n^{(R)}\big ) \rightsquigarrow (\mathbb {W}, \mathbb {W}^{(1)},\ldots , \mathbb {W}^{(R)}) \end{aligned}$$
(66)

in \(C([0,1]^3)^{R+1}\) as \(n \rightarrow \infty \) where the process \(\hat{\mathbb {W}}_n\) is defined by (60), the bootstrap counterparts \(\hat{\mathbb {W}}_{n}^{(1)},\ldots ,\hat{\mathbb {W}}_{n}^{(R)}\) are defined by (36) and the random variables \(\mathbb {W}^{(1)},\ldots ,\mathbb {W}^{(R)}\) are independent copies of \(\mathbb {W}\) which is defined by its covariance operator (31).

If \(d_\infty = 0\), the continuous mapping theorem directly implies

$$\begin{aligned} \big ( \hat{\mathbb {M}}_n , \check{T}_{n}^{(1)},\ldots ,\check{T}_{n}^{(R)}\big ) {\mathop {\longrightarrow }\limits ^{\mathcal {D}}} (\check{T}, \check{T}^{(1)},\ldots , \check{T}^{(R)}) \end{aligned}$$

in \({\mathbb {R}}^{R+1}\) as \(n \rightarrow \infty \) where the statistic \(\hat{\mathbb {M}}_n\) is defined by (29), the bootstrap statistics \(\check{T}_{n}^{(1)},\ldots ,\check{T}_{n}^{(R)}\) are defined by (37) and the random variables \(\check{T}^{(1)},\ldots ,\check{T}^{(R)}\) are independent copies of the random variable \(\check{T}\) defined by (30). Now the same arguments as in the discussion starting from Eq. (54) imply the assertions of Theorem 4. \(\square \)

Proof of Theorem 5. We first note that it follows by similar arguments as given in the proof of Theorem 4.2 in Dette et al. (2020a) that the estimator of the unknown change location defined by (35) satisfies

$$\begin{aligned} |\hat{s}-s^*| = O_{\mathbb {P}}(n^{-1}) \end{aligned}$$

whenever \(d_\infty >0\). If \(d_\infty =0\), we assume that the estimator \(\hat{s}\) converges weakly to a \([\vartheta ,1-\vartheta ]\)-valued random variable, which we denote by \(s_{\max }\). Then, if \(d_\infty > 0\), the convergence in (33) and Slutsky’s theorem yield

$$\begin{aligned} \sqrt{n}\big ( \hat{d}_\infty - d_\infty \big ) {\mathop {\longrightarrow }\limits ^{\mathcal {D}}} D (\mathcal {E})= {\tilde{D} (\mathcal {E}) }/[{s^*(1-s^*)}] \, , \end{aligned}$$
(67)

where \(\tilde{D} (\mathcal {E})\) is the same as in (33) and the statistic \(\hat{d}_\infty \) is defined by (39).
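Although the defining display (35) is not reproduced in this appendix, a change-location estimator of this type can be sketched as the maximiser of the sup-norm CUSUM profile over a trimmed interval \([\vartheta, 1-\vartheta]\); the following is our hedged illustration, not the authors' exact definition:

```python
# Hedged sketch (an assumption, not the paper's display (35)): take s-hat as
# the maximiser of the sup-norm CUSUM profile over a trimmed interval.
import numpy as np

def estimate_change_location(W, theta=0.05):
    """W: (n+1, p, p) CUSUM field (see cusum_transform above); returns s-hat."""
    n = W.shape[0] - 1
    profile = np.abs(W).max(axis=(1, 2))        # s -> sup_{t,u} |W(s,t,u)|
    lo, hi = int(np.ceil(theta * n)), int(np.floor((1 - theta) * n))
    return (lo + int(np.argmax(profile[lo:hi + 1]))) / n
```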

The same arguments as in the proof of Theorem 3.6 in Dette et al. (2020a) again yield that the estimators of the extremal sets defined by (40) are consistent. The convergence in (66) and similar arguments as in the proof of Theorem 4.4 in the same reference then yield

$$\begin{aligned} \big ( \sqrt{n} ~ (\hat{d}_\infty - d_\infty ) ,~ \check{K}_{n}^{(1)},\ldots ,\check{K}_{n}^{(R)}\big ) {\mathop {\longrightarrow }\limits ^{\mathcal {D}}} (D(\mathcal {E}),~ D^{(1)}(\mathcal {E}),\ldots , D^{(R)}(\mathcal {E})) \end{aligned}$$
(68)

in \({\mathbb {R}}^{R+1}\) as \(n \rightarrow \infty \), where the bootstrap statistics \(\check{K}_{n}^{(1)},\ldots ,\check{K}_{n}^{(R)}\) are defined by (41) and the random variables \(D^{(1)}(\mathcal {E}),\ldots ,D^{(R)}(\mathcal {E})\) are independent copies of \(D(\mathcal {E})\), which is defined by (67). This convergence holds true under both the null and the alternative hypothesis, and the same arguments as in the discussion starting from Eq. (55) imply the assertions of Theorem 5. \(\square \)

About this article


Cite this article

Dette, H., Kokot, K. Detecting relevant differences in the covariance operators of functional time series: a sup-norm approach. Ann Inst Stat Math 74, 195–231 (2022). https://doi.org/10.1007/s10463-021-00795-2
