
The effect of correlated errors on the performance of local linear estimation of regression function based on random functional design

Regular Article · Statistical Papers

Abstract

This article considers the problem of nonparametric estimation of the regression function \(r\) in the functional regression model \(Y = r(X) +\varepsilon \) with a scalar response Y, a functional explanatory variable X, and a second-order stationary error process \(\varepsilon \). Under specific criteria, we construct a local linear kernel estimator of \(r\) from a functional random design with correlated errors. The exact rates of convergence of the mean squared error of the constructed estimator are established for both short and long range dependent error processes. Simulation studies are conducted to assess the performance of the proposed local linear estimator, and examples of time series data are considered.
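For readers who wish to experiment, the following is a minimal, self-contained numerical sketch of the kind of estimator studied here: it simulates the model \(Y = r(X) + \varepsilon \) with functional covariates and AR(1) errors, and evaluates the local linear estimate \({{\widehat{r}}}_h(x)=\sum _i \omega _i(x)Y_i / \sum _i \omega _i(x)\) built from the pairwise weights (10) of the appendix. The data-generating process, kernel, distance, and bandwidth rule are illustrative assumptions, not the choices made in the paper's simulation study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate the model Y = r(X) + eps with functional X and AR(1) errors (toy choices).
n, m = 200, 100                                   # sample size, grid points per curve
X = np.cumsum(rng.normal(size=(n, m)), axis=1) / np.sqrt(m)   # Brownian-type curves on [0, 1]
r = lambda x: (np.asarray(x) ** 2).mean(axis=-1)              # true regression operator (toy choice)
eps = np.zeros(n)
eps[0] = rng.normal()
for i in range(1, n):                             # AR(1) errors: short-range dependence
    eps[i] = 0.5 * eps[i - 1] + np.sqrt(1 - 0.25) * rng.normal()
Y = r(X) + 0.1 * eps

def local_linear(dist, Y, h):
    """r_hat_h(x) = sum_i w_i(x) Y_i / sum_i w_i(x), where w_i(x) = sum_j omega_{j,i}(x)
    and omega_{j,i}(x) = d_j (d_j - d_i) K_h(d_i) K_h(d_j), cf. (10) in the appendix."""
    K = (dist <= h).astype(float)                 # asymmetric uniform kernel on [0, h]
    O = dist[None, :] * (dist[None, :] - dist[:, None]) * K[:, None] * K[None, :]
    w = O.sum(axis=1)                             # w_i(x); the total weight is >= 0 by Cauchy-Schwarz
    return w @ Y / w.sum()

x0 = X[0]                                         # estimate r at an observed curve
dist = np.sqrt(((X - x0) ** 2).mean(axis=1))      # discretized L2 distances d(X_i, x0)
h = np.quantile(dist[dist > 0], 0.25)             # crude nearest-neighbour bandwidth
print("estimate:", local_linear(dist, Y, h), " truth:", r(x0))
```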


References

  • Aneiros-Pérez G, Vieu P (2008) Nonparametric time series prediction: a semi-functional partial linear modeling. J Multivar Anal 99:834–857

  • Aneiros-Pérez G, Cao R, Vilar-Fernández J (2011) Functional methods for time series prediction: a nonparametric approach. J Forecast 30:377–392

  • Aneiros-Pérez G, Horová I, Hušková M, Vieu P (2022) Special issue on functional data analysis and related fields. J Multivar Anal 189:104861

  • Baíllo A, Grané A (2009) Local linear regression for functional predictor and scalar response. J Multivar Anal 100:102–111

  • Barrientos-Marin J, Ferraty F, Vieu P (2010) Locally modelled regression and functional data. J Nonparametr Stat 22:617–632

  • Benhenni K, Ferraty F, Rachdi M, Vieu P (2007) Local smoothing regression with functional data. Comput Stat 22:353–369

  • Benhenni K, Hedli-Griche S, Rachdi M (2010) Estimation of the regression operator from functional fixed-design with correlated errors. J Multivar Anal 101:476–490

  • Benhenni K, Hedli-Griche S, Rachdi M (2017) Regression models with correlated errors based on functional random design. TEST 26:1–21

  • Benhenni K, Hajj-Hassan A, Su Y (2019) Local polynomial estimation of regression operators from functional data with correlated errors. J Multivar Anal 170:80–94

  • Beran J (1992) Statistical methods for data with long-range dependence. Stat Sci 7:404–420

  • Beran J (1994) Statistics for long-memory processes. Chapman & Hall, New York

  • Berlinet A, Elamine A, Mas A (2011) Local linear regression for functional data. Ann Inst Stat Math 63:1047–1075

  • Boj E, Delicado P, Fortiana J (2010) Distance-based local linear regression for functional predictors. Comput Stat Data Anal 54:429–437

  • Cox DR (1984) Long-range dependence: a review. In: David HA, David HT (eds) Statistics: an appraisal. Iowa State University Press, Ames, pp 55–73

  • Demongeot J, Laksaci A, Madani F, Rachdi M (2010) Local linear estimation of the conditional density for functional data. C R Math Acad Sci Paris 348:931–934

  • Fan J, Gijbels I (1996) Local polynomial modelling and its applications. Chapman & Hall, London

  • Ferraty F (ed) (2011) Recent advances in functional data analysis and related topics. Physica-Verlag, Berlin

  • Ferraty F, Nagy S (2022) Scalar-on-function local linear regression and beyond. Biometrika 109:439–455

  • Ferraty F, Vieu P (2006) Nonparametric functional data analysis: theory and practice. Springer, New York

  • Ferraty F, Rabhi A, Vieu P (2005) Conditional quantiles for dependent functional data with application to the climatic El Niño phenomenon. Sankhyā 67:378–398

  • Ferraty F, Mas A, Vieu P (2007) Nonparametric regression on functional data: inference and practical aspects. Aust N Z J Stat 49:267–286

  • Horváth L, Kokoszka P (2012) Inference for functional data with applications. Springer, New York

  • Li WV, Shao QM (2001) Gaussian processes: inequalities, small ball probabilities and applications. In: Stochastic processes: theory and methods. Handbook of statistics, vol 19. North-Holland, Amsterdam

  • Masry E (2003) Local polynomial regression estimation with correlated errors. J Multivar Anal 86:330–359

  • Petersen A, Müller H-G (2019) Fréchet regression for random objects with Euclidean predictors. Ann Stat 47:691–719

  • Rachdi M, Vieu P (2007) Nonparametric regression for functional data: automatic smoothing parameter selection. J Stat Plan Inference 137:2784–2801

  • Ramsay JO, Silverman BW (2002) Applied functional data analysis: methods and case studies. Springer, New York

  • Ramsay JO, Silverman BW (2005) Functional data analysis. Springer, New York

  • Sarda P, Vieu P (2000) Kernel regression. In: Schimek MG (ed) Smoothing and regression: approaches, computation, and application. Wiley, New York, pp 43–70

  • Vilar-Fernández JM, Cao R (2007) Nonparametric forecasting in time series: a comparative study. Commun Stat Simul Comput 36:311–334

  • Zeidler E (1986) Nonlinear functional analysis and its applications I: fixed-point theorems. Springer, New York


Acknowledgements

The authors would like to express their gratitude to the three anonymous reviewers for their constructive critiques, suggestions, and comments, which have led to an improved version of the article. The authors are also grateful to the Editor-in-Chief for patiently handling the article.


Corresponding author

Correspondence to Karim Benhenni.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Proof of Lemma 1

Note that

$$\begin{aligned} {I\!\!E}\left\{ d^k(X_1,x) K_h^\nu (d(X_1,x))\right\} = \int _{0}^{h} d^k K_h^{\nu }(d) d\varphi (d) = h^{k} \int _{0}^{1} t^k K^{\nu }(t) d\varphi (ht) \end{aligned}$$

The second equality follows from the change of variable \(t=d/h\). Recalling the definition of \(\tau _h(s)\), noting that \(\tau _h(1)=1\), and integrating by parts, we then have

$$\begin{aligned} \int _{0}^{1} t^k K^{\nu }(t)\, d\varphi (ht) = \varphi (h) \left( K^{\nu }(1) - \int _{0}^1\left( s^k\,K^{\nu }(s)\right) ' \tau _h(s)\, ds\right) = \varphi (h) M_{k,\nu }(h) \end{aligned}$$

Thus

$$\begin{aligned} {I\!\!E}\left\{ d^k(X_1,x) K_h^\nu (d(X_1,x))\right\} = h^{k} \varphi (h) M_{k,\nu }(h) = h^{k} \varphi (h) M_{k,\nu } (1+o(1)) \end{aligned}$$

This completes the proof of Lemma 1. \(\square \)
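As a quick sanity check (an illustration added here, not part of the original proof), take the uniform kernel \(K \equiv 1\) on \([0,1]\) and assume a fractal-type small ball probability \(\varphi (h)=h^\tau \), so that \(\tau _h(s)=\tau _0(s)=s^\tau \). Then

$$\begin{aligned} M_{k,\nu } = K^{\nu }(1) - \int _{0}^1\left( s^k\,K^{\nu }(s)\right) ' s^{\tau }\, ds = 1 - k\int _{0}^1 s^{k+\tau -1}\, ds = \frac{\tau }{k+\tau }, \end{aligned}$$

and a direct computation gives \({I\!\!E}\left\{ d^k(X_1,x) K_h^\nu (d(X_1,x))\right\} = \int _{0}^{h} t^k\, d\varphi (t) = \frac{\tau }{k+\tau }\, h^{k}\varphi (h)\), so the conclusion of Lemma 1 holds in this case with no remainder term.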

Proof of Theorem 1

Recall that, for random variables U and V, the following identity holds:

$$\begin{aligned} {I\!\!E}(U/V)&= {I\!\!E}(U)/{I\!\!E}(V) - {I\!\!E}\left[ V(U-{I\!\!E}(U))\right] /{I\!\!E}^2(V)\\&\quad + {I\!\!E}\left[ (U/V)(V-{I\!\!E}(V))^2\right] /{I\!\!E}^2(V) \end{aligned}$$

Applying this identity with \(U={{\widehat{r}}}_{\scriptscriptstyle h,2}(x)\) and \(V={{\widehat{r}}}_{\scriptscriptstyle h,1}(x)\) yields the following classical decomposition of the bias term \(B_n\) of the estimator \({{\widehat{r}}}_{\scriptscriptstyle h}(x)\):

$$\begin{aligned} B_n = \frac{{I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,2}(x))}{{I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,1}(x))} -\frac{B_{n,1}}{{I\!\!E}^2({{\widehat{r}}}_{\scriptscriptstyle h,1}(x))}+\frac{B_{n,2}}{{I\!\!E}^2({{\widehat{r}}}_{\scriptscriptstyle h,1}(x))} - r(x) \end{aligned}$$

where \({B_{n,1}}={I\!\!E}\left[ {{\widehat{r}}}_{\scriptscriptstyle h,1}(x)\left( {{\widehat{r}}}_{\scriptscriptstyle h,2}(x)-{I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,2}(x))\right) \right] \) and \({B_{n,2}}={I\!\!E}\Big [{{\widehat{r}}}_{\scriptscriptstyle h}(x)\Big ({{\widehat{r}}}_{\scriptscriptstyle h,1}(x)-{I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,1}(x))\Big )^2\Big ]\).

From the assumed independence of X and \(\varepsilon \), we have

$$\begin{aligned} \frac{{I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,2}(x))}{{I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,1}(x))} - r(x)&= \frac{\sum ^{n}_{i=1} {I\!\!E}\left[ (Y_i-r(x))\,\omega _i(x)\right] }{\sum ^{n}_{i=1}{I\!\!E}(\omega _i(x))}\\&= \frac{\sum ^{n}_{i=1} {I\!\!E}\left[ \phi (d(X_i,x))\,\omega _i(x)\right] }{\sum ^{n}_{i=1}{I\!\!E}(\omega _i(x))} {\mathop {=}\limits ^{\Delta }} \frac{N}{D} \end{aligned}$$

Write \(\omega _i(x)= \sum ^{n}_{j=1} \omega _{j,i}(x)\), where

$$\begin{aligned} \omega _{j,i}(x)= d(X_j,x)\left[ d(X_j,x) - d(X_i,x)\right] K_h(d(X_i,x))K_h(d(X_j,x)) \end{aligned}$$
(10)

with \(\omega _{i,i}(x)=0\). Then \({I\!\!E}(\omega _i(x))= \sum ^{n}_{j=1}{I\!\!E}(\omega _{j,i}(x))\). Using the assumption that the \(X_i\)'s are independent, as in hypothesis (8), and writing \(d_j = d(X_j,x)\) for brevity, we have

$$\begin{aligned} {I\!\!E}(\omega _{j,i}(x))={I\!\!E}\left[ d_j^2 K_h(d_j)\right] {I\!\!E}\left[ K_h(d_i)\right] - {I\!\!E}\left[ d_i K_h(d_i)\right] {I\!\!E}\left[ d_j K_h(d_j)\right] \end{aligned}$$

From Lemma 1, with \(k=1, 2\), and \(\nu =1\), respectively, we have

$$\begin{aligned} {I\!\!E}(\omega _{j,i}(x))= h^2\varphi ^2(h)(M_{2,1}M_{0,1} - M_{1,1}^2) (1 + o(1)). \end{aligned}$$
(11)

Therefore, asymptotically, the denominator D becomes

$$\begin{aligned} D= \sum ^{n}_{i=1}{I\!\!E}(\omega _i(x)) = n(n-1) h^2\varphi ^2(h)(M_{2,1}M_{0,1} - M_{1,1}^2) (1 + o(1)). \end{aligned}$$

For the numerator term N, we notice that

$$\begin{aligned} {I\!\!E}\left[ \phi (d(X_i,x))\,\omega _{j,i}(x)\right] = {I\!\!E}\left[ d_j^2 K_h(d_j)\right] {I\!\!E}\left[ \phi (d_i) K_h(d_i)\right] - {I\!\!E}\left[ d_i \phi (d_i) K_h(d_i)\right] {I\!\!E}\left[ d_j K_h(d_j)\right] \end{aligned}$$

Using a Taylor expansion of \(\phi \) at zero up to order two (noting that \(\phi (0)=0\)) and hypotheses (3) and (7), we have

$$\begin{aligned} {I\!\!E}\left[ \phi (d_i)K_h(d_i)\right]&= \int _{0}^{h} \phi (d)K_h(d)\, d\varphi (d) = \int _{0}^{1} \phi (ht) K(t)\, d\varphi (ht)\\&= \phi '(0)h\varphi (h)\int _{0}^{1} t K(t)\, d\tau _{h}(t) + \frac{h^2}{2} \phi ^{\prime \prime }(0)\varphi (h) \int _{0}^{1} t^2 K(t)\, d\tau _{h}(t)\, (1+ o(1)) \\&= \phi '(0)h\varphi (h)\int _{0}^{1} t K(t)\, d\tau _{0}(t) + \frac{h^2}{2} \phi ^{\prime \prime }(0)\varphi (h) \int _{0}^{1} t^2 K(t)\, d\tau _{0}(t)\, (1+ o(1))\\&= \phi '(0)h\varphi (h)M_{1,1} + \frac{h^2}{2} \phi ^{\prime \prime }(0)\varphi (h) M_{2,1}\, (1+ o(1)) \end{aligned}$$

and similarly,

$$\begin{aligned} {I\!\!E}\left[ d_i \phi (d_i)K_h(d_i)\right] = \phi '(0)h \varphi (h)M_{2,1} + \frac{h^2}{2} \phi ^{\prime \prime }(0)\varphi (h) M_{3,1}\, (1+ o(1)) \end{aligned}$$
(12)

which, together with Lemma 1, yields

$$\begin{aligned} {I\!\!E}\left[ \phi (d(X_i,x))\,\omega _{j,i}(x)\right] = \frac{\phi ^{\prime \prime }(0)}{2}h^4 \varphi ^2(h) (M_{2,1}^2 - M_{1,1}M_{3,1}) (1 + o(1)) \end{aligned}$$

and thus the numerator N has the following asymptotic expression

$$\begin{aligned} N = \sum ^{n}_{i=1}{I\!\!E}\left[ \phi (d(X_i,x))\,\omega _i(x)\right] = n(n-1) h^4 \varphi ^2(h) \frac{\phi ^{\prime \prime }(0)}{2} (M_{2,1}^2 - M_{1,1}M_{3,1}) (1 + o(1)). \end{aligned}$$

Therefore

$$\begin{aligned} \frac{{I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,2}(x))}{{I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,1}(x))} - r(x) = h^2 \frac{\phi ^{\prime \prime }(0)}{2}\frac{M_{2,1}^2 - M_{1,1}M_{3,1}}{M_{2,1}M_{0,1} - M_{1,1}^2} (1 + o(1)). \end{aligned}$$

The rate of convergence of the term \(B_{n,1}\) follows from assertion (iii) of Lemma 3:

$$\begin{aligned} B_{n,1}= Cov({{\widehat{r}}}_{\scriptscriptstyle h,1}(x), {{\widehat{r}}}_{\scriptscriptstyle h,2}(x))= O\left( \frac{1}{n\varphi (h)}\right) . \end{aligned}$$

For the term \(B_{n,2}\), by the Cauchy–Schwarz inequality, we have

$$\begin{aligned} B_{n,2} \le \sqrt{Var({{\widehat{r}}}_{\scriptscriptstyle h}(x)) Var({{\widehat{r}}}_{\scriptscriptstyle h,1}(x))} \end{aligned}$$

and using (i) and (ii) of Lemma 3, it follows that

$$\begin{aligned} B_{n,2}= O\left( \frac{1}{n\varphi (h)}\right) . \end{aligned}$$

Therefore, \(B_{n,1}\) and \(B_{n,2}\) are asymptotically negligible with respect to \({I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,2}(x))/{I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,1}(x))\). \(\square \)
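To make the leading bias term concrete (an added illustration under the same assumptions as in the remark after Lemma 1, not part of the original argument), take the uniform kernel and \(\varphi (h)=h^\tau \), so that \(M_{k,1}=\tau /(k+\tau )\). For \(\tau =1\) we get \(M_{0,1}=1\), \(M_{1,1}=1/2\), \(M_{2,1}=1/3\) and \(M_{3,1}=1/4\), hence

$$\begin{aligned} B_n = h^2\, \frac{\phi ^{\prime \prime }(0)}{2}\, \frac{(1/3)^2 - (1/2)(1/4)}{(1/3)(1) - (1/2)^2}\, (1 + o(1)) = -\frac{\phi ^{\prime \prime }(0)}{12}\, h^2\, (1 + o(1)). \end{aligned}$$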

Proof of Theorem 2

We use the following decomposition of the variance, as in, for instance, Sarda and Vieu (2000) in the finite-dimensional case and Ferraty et al. (2007) in the functional framework:

$$\begin{aligned} Var({{\widehat{r}}}_{\scriptscriptstyle h}(x))&= \frac{Var({{\widehat{r}}}_{\scriptscriptstyle h,2}(x))}{{I\!\!E}^2({{\widehat{r}}}_{\scriptscriptstyle h,1}(x))}-4\, \frac{{I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,2}(x))\; Cov({{\widehat{r}}}_{\scriptscriptstyle h,1}(x), {{\widehat{r}}}_{\scriptscriptstyle h,2}(x))}{{I\!\!E}^3({{\widehat{r}}}_{\scriptscriptstyle h,1}(x))}\\&\quad +3\, Var({{\widehat{r}}}_{\scriptscriptstyle h,1}(x))\, \frac{{I\!\!E}^2({{\widehat{r}}}_{\scriptscriptstyle h,2}(x))}{{I\!\!E}^4({{\widehat{r}}}_{\scriptscriptstyle h,1}(x))}+o\left( \frac{1}{n\varphi (h)}\right) . \end{aligned}$$

The asymptotic expression of the variance is obtained from Lemmas 2 and 3:

$$\begin{aligned} Var({{\widehat{r}}}_{\scriptscriptstyle h}(x))&= \frac{1}{n\varphi (h)}\left( r^2(x)\frac{\psi _2}{\psi _0^2} + \rho _\epsilon (0)\frac{\psi _2}{\psi _0^2} - 4r^2(x)\frac{\psi _2}{\psi _0^2} + 3r^2(x)\frac{\psi _2 + \psi _3}{\psi _0^2} \right) + \frac{{\tilde{s}}_n}{n^2} + o\left( \frac{1}{n\varphi (h)}\right) \\&= \frac{1}{n \varphi (h)} \left( \frac{3\psi _3}{\psi _0 ^2} r^2(x) + \frac{\rho _\epsilon (0)\psi _2}{\psi _0 ^2} \right) + \frac{{\tilde{s}}_n}{n^2} + o\left( \frac{1}{n\varphi (h)}\right) . \end{aligned}$$

This completes the proof of Theorem 2. It remains to prove Lemmas 2 and 3. \(\square \)

Proof of Lemma 2

The asymptotic expression of

$$\begin{aligned} {I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,1}(x))= \frac{1}{n(n-1)h^2\varphi ^2(h)}\sum _{i=1}^{n}{I\!\!E}(\omega _i(x)) \end{aligned}$$

follows from the proof of Theorem 1. Concentrating on \({I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,2}(x))\), we have

$$\begin{aligned} {I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,2}(x))=\frac{1}{n(n-1)h^2\varphi ^2(h)} \sum ^{n}_{i=1}\sum _{j\ne i}{I\!\!E}\left[ \omega _{j,i}(x)Y_i\right] , \end{aligned}$$

where \(\omega _{j,i}(x)\) is defined in (10). Since the \(X_i\)'s are assumed independent, we have

$$\begin{aligned} {I\!\!E}\left[ \omega _{j,i}(x)Y_i\right] = {I\!\!E}\left[ {I\!\!E}(Y_i/X_i)\,\omega _{j,i}(x) \right] =(r(x) +o(1))\, {I\!\!E}(\omega _{j,i}(x)). \end{aligned}$$

From (11) in the proof of Theorem 1, we have

$$\begin{aligned} {I\!\!E}(\omega _{j,i}(x))= h^2 \varphi ^2(h)(M_{2,1}M_{0,1} - M_{1,1}^2) (1 + o(1)) \end{aligned}$$

so that

$$\begin{aligned} {I\!\!E}\left[ \omega _{j,i}(x)Y_i\right] =(r(x) + o(1))\, h^2 \varphi ^2(h)\psi _0 (1 + o(1)) \end{aligned}$$

and hence,

$$\begin{aligned} {I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,2}(x))= r(x)\psi _0 (1 + o(1)). \end{aligned}$$

\(\square \)
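The conclusion of Lemma 2 can be checked numerically in the illustrative setting used earlier (uniform kernel and \(\varphi (h)=h^\tau \), for which \(\psi _0=M_{2,1}M_{0,1}-M^2_{1,1}=\tau /(2+\tau )-\left( \tau /(1+\tau )\right) ^2\)). The sketch below, with arbitrary tuning values, is an addition for verification only:

```python
import numpy as np

rng = np.random.default_rng(1)
tau, n, h, reps = 1.0, 300, 0.3, 200
phi = h ** tau                                    # small ball probability phi(h) = h^tau
psi0 = tau / (2 + tau) - (tau / (1 + tau)) ** 2   # M_{2,1}M_{0,1} - M_{1,1}^2 for K = 1

vals = []
for _ in range(reps):
    d = rng.uniform(size=n) ** (1 / tau)          # distances d(X_i, x) with cdf t^tau on [0, 1]
    K = (d <= h).astype(float)                    # uniform kernel on [0, h]
    O = d[None, :] * (d[None, :] - d[:, None]) * K[:, None] * K[None, :]
    vals.append(O.sum() / (n * (n - 1) * h ** 2 * phi ** 2))   # realization of r_hat_{h,1}(x)
print("Monte Carlo mean:", np.mean(vals), " psi_0 =", psi0)
```

For \(\tau =1\) the Monte Carlo mean should settle near \(\psi _0 = 1/12 \approx 0.083\).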

Proof of Lemma 3

Proof of (i). From the following definition of the variance

$$\begin{aligned} Var({{\widehat{r}}}_{\scriptscriptstyle h,1}(x))= {I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,1}^2(x)) - {I\!\!E}^2({{\widehat{r}}}_{\scriptscriptstyle h,1}(x)) \end{aligned}$$

and the expression of \({{\widehat{r}}}_{\scriptscriptstyle h,1}(x)\), we can write

$$\begin{aligned} {I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,1}^2(x))&= \frac{1}{n^2(n-1)^2 h^4\varphi ^4(h)} \sum _{i=1}^{n}{I\!\!E}(\omega ^2_i(x))\\&\quad + \frac{1}{n^2(n-1)^2 h^4\varphi ^4(h)} \sum _{\begin{array}{c} 1\le i_1,i_2 \le n \\ i_1\ne i_2 \end{array}} {I\!\!E}(\omega _{i_1}(x)\omega _{i_2}(x)). \end{aligned}$$

Now,

$$\begin{aligned} {I\!\!E}(\omega ^2_i(x))= \sum _{j_1=1}^{n}{I\!\!E}(\omega ^2_{j_1, i}(x)) + \sum _{\begin{array}{c} 1\le j_1,j_2 \le n \\ j_1\ne j_2 \ne i \end{array}} {I\!\!E}(\omega _{j_1,i}(x)\omega _{j_2,i}(x)). \end{aligned}$$

Likewise, from hypothesis (8), and since \(\omega ^2_{j,i}(x)= d^2_j(d_j-d_i)^2 K^2_h(d_i)K^2_h(d_j)\), we have

$$\begin{aligned} {I\!\!E}(\omega ^2_{j, i}(x))&= {I\!\!E}\left[ K^2_h(d_i)\right] {I\!\!E}\left[ d^4_j K^2_h(d_j)\right] - 2{I\!\!E}\left[ d_i K^2_h(d_i)\right] {I\!\!E}\left[ d^3_j K^2_h(d_j)\right] \\&\quad + {I\!\!E}\left[ d^2_i K^2_h(d_i)\right] {I\!\!E}\left[ d^2_j K^2_h(d_j)\right] . \end{aligned}$$

From Lemma 1, we obtain

$$\begin{aligned} {I\!\!E}(\omega ^2_{j, i}(x))= h^4 \varphi ^2(h) \left( M_{0,2}M_{4,2}- 2 M_{1,2}M_{3,2} + M^2_{2,2} \right) (1 + o(1)). \end{aligned}$$

Now, we compute \({I\!\!E}(\omega _{j_1,i}(x)\omega _{j_2,i}(x))\) for \(j_1 \ne j_2 \ne i\):

$$\begin{aligned} {I\!\!E}(\omega _{j_1,i}(x)\omega _{j_2,i}(x))&= {I\!\!E}[d^2_{j_1}K_h(d_{j_1})]{I\!\!E}[d^2_{j_2}K_h(d_{j_2})]{I\!\!E}[K^2_h(d_i)] \\&\quad - {I\!\!E}[d^2_{j_1}K_h(d_{j_1})]{I\!\!E}[d_{j_2}K_h(d_{j_2})]{I\!\!E}[d_iK^2_h(d_i)]\\&\quad - {I\!\!E}[d_{j_1}K_h(d_{j_1})]{I\!\!E}[d^2_{j_2}K_h(d_{j_2})]{I\!\!E}[d_iK^2_h(d_i)] \\&\quad + {I\!\!E}[d_{j_1}K_h(d_{j_1})]{I\!\!E}[d_{j_2}K_h(d_{j_2})]{I\!\!E}[d^2_i K^2_h(d_i)]. \end{aligned}$$

Again, using Lemma 1, we obtain

$$\begin{aligned} {I\!\!E}(\omega _{j_1,i}(x)\omega _{j_2,i}(x))&= h^4 \varphi ^3(h) \left( M_{0,2}M^2_{2,1}- 2 M_{1,2}M_{2,1}M_{1,1} + M_{2,2}M^2_{1,1} \right) (1 + o(1))\\&= h^4 \varphi ^3(h) \psi _2 (1 + o(1)). \end{aligned}$$

It follows that

$$\begin{aligned} {I\!\!E}(\omega ^2_i(x)) = \left( (n-1) h^4 \varphi ^2(h) \psi _1 + (n-1)(n-2) h^4 \varphi ^3(h) \psi _2\right) (1 + o(1)) \end{aligned}$$
(13)

where \(\psi _1= M_{0,2}M_{4,2}-2M_{1,2}M_{3,2}+ M^2_{2,2}\) and \(\psi _2\) is defined in Theorem 2.

Concentrating on the second term of \({I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,1}^2(x))\), we have for \(i_1 \ne i_2\)

$$\begin{aligned} {I\!\!E}(\omega _{i_1}(x)\omega _{i_2}(x))= \sum _{j_1\ne i_1}{I\!\!E}(\omega _{j_1,i_1}(x)\omega _{j_1,i_2}(x)) + \sum _{\begin{array}{c} 1\le j_1,j_2 \le n \\ j_1\ne j_2\ne i_1 \ne i_2 \end{array}} {I\!\!E}(\omega _{j_1,i_1}(x)\omega _{j_2,i_2}(x)). \end{aligned}$$

We have for \(j_1 \ne i_1\),

$$\begin{aligned} {I\!\!E}(\omega _{j_1,i_1}(x)\omega _{j_1,i_2}(x))&= {I\!\!E}[d^4_{j_1}K^2_h(d_{j_1})] {I\!\!E}[K_h(d_{i_1})]{I\!\!E}[K_h(d_{i_2})] \\&\quad - 2 {I\!\!E}[d^3_{j_1}K^2_h(d_{j_1})] {I\!\!E}[d_{i_1}K_h(d_{i_1})] {I\!\!E}[K_h(d_{i_2})] \\&\quad +{I\!\!E}[d^2_{j_1}K^2_h(d_{j_1})]{I\!\!E}[d_{i_1}K_h(d_{i_1})]{I\!\!E}[d_{i_2}K_h(d_{i_2})]. \end{aligned}$$

Then from Lemma 1, we obtain

$$\begin{aligned} {I\!\!E}(\omega _{j_1,i_1}(x)\omega _{j_1,i_2}(x)) = h^4 \varphi ^3(h) \psi _3 (1+o(1)) \end{aligned}$$

where \(\psi _3\) is defined in Theorem 2.

For \(j_1 \ne j_2\) (\(\ne i_1,i_2\)),

$$\begin{aligned} {I\!\!E}(\omega _{j_1,i_1}(x)\omega _{j_2,i_2}(x))&= {I\!\!E}[d^2_{j_1}K_h(d_{j_1})]{I\!\!E}[d^2_{j_2}K_h(d_{j_2})]{I\!\!E}[K_h(d_{i_1})]{I\!\!E}[K_h(d_{i_2})] \\&\quad - {I\!\!E}[K_h(d_{i_1})]{I\!\!E}[d^2_{j_1}K_h(d_{j_1})]{I\!\!E}[d_{i_2}K_h(d_{i_2})]{I\!\!E}[d_{j_2}K_h(d_{j_2})] \\&\quad - {I\!\!E}[d_{i_1}K_h(d_{i_1})]{I\!\!E}[K_h(d_{i_2})]{I\!\!E}[d_{j_1}K_h(d_{j_1})]{I\!\!E}[d^2_{j_2}K_h(d_{j_2})] \\&\quad +{I\!\!E}[d_{i_1}K_h(d_{i_1})]{I\!\!E}[d_{i_2}K_h(d_{i_2})]{I\!\!E}[d_{j_1}K_h(d_{j_1})]{I\!\!E}[d_{j_2}K_h(d_{j_2})] \end{aligned}$$

so that from Lemma 1,

$$\begin{aligned} {I\!\!E}(\omega _{j_1,i_1}(x)\omega _{j_2,i_2}(x))= h^4 \varphi ^4(h) \psi _4 (1+o(1)) \end{aligned}$$

where \(\psi _4= M^2_{2,1}M^2_{0,1} -2M_{2,1}M^2_{1,1}M_{0,1} +M^4_{1,1}\). Therefore, we obtain

$$\begin{aligned} {I\!\!E}(\omega _{i_1}(x)\omega _{i_2}(x))= \left( (n-1)h^4 \varphi ^3(h) \psi _3 + (n-1)(n-2)h^4 \varphi ^4(h) \psi _4 \right) (1+o(1)) \end{aligned}$$
(14)

Gathering the terms (13) and (14), we obtain

$$\begin{aligned} {I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,1}^2(x))= \left( \psi _4 + \frac{1}{n^2\varphi ^2(h)}\psi _1 + \frac{1}{n\varphi (h)}(\psi _2 +\psi _3) \right) (1+o(1)) \end{aligned}$$

and noticing that \(\psi _4 = \psi ^2_0\), we finally obtain the asymptotic expression of the variance:

$$\begin{aligned} Var({{\widehat{r}}}_{\scriptscriptstyle h,1}(x))= \frac{1}{n\varphi (h)}(\psi _2 +\psi _3) (1+o(1)). \end{aligned}$$
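In the illustrative uniform-kernel setting with \(\varphi (h)=h^\tau \) (an added aside, not part of the proof), \(M_{k,\nu }=\tau /(k+\tau )=:M_k\) for every \(\nu \), and the constants above take the closed forms

$$\begin{aligned} \psi _0 = M_2 - M_1^2, \qquad \psi _2 = M_2\left( M_2 - M_1^2\right) , \qquad \psi _4 = \left( M_2 - M_1^2\right) ^2, \end{aligned}$$

so the identity \(\psi _4=\psi ^2_0\) used above can be read off directly; for \(\tau =1\) these values are \(1/12\), \(1/36\) and \(1/144\), respectively.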

Proof of (ii). Using the following definition of the variance

$$\begin{aligned} Var({{\widehat{r}}}_{\scriptscriptstyle h,2}(x))= {I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,2}^2(x)) - {I\!\!E}^2({{\widehat{r}}}_{\scriptscriptstyle h,2}(x)) \end{aligned}$$

and the expression of \({{\widehat{r}}}_{\scriptscriptstyle h,2}(x)\), we write

$$\begin{aligned} {I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,2}^2(x))&= \frac{1}{n^2(n-1)^2 h^4\varphi ^4(h)} \sum _{i=1}^{n}{I\!\!E}\left[ \omega ^2_i(x)Y^2_i\right] \\&\quad + \frac{1}{n^2(n-1)^2 h^4\varphi ^4(h)} \sum _{\begin{array}{c} 1\le i_1,i_2 \le n \\ i_1\ne i_2 \end{array}} {I\!\!E}\left[ \omega _{i_1}(x)\omega _{i_2}(x)Y_{i_1}Y_{i_2}\right] . \end{aligned}$$

Now, for the first term of \({I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,2}^2(x))\), we have

$$\begin{aligned} {I\!\!E}\left[ \omega ^2_i(x)Y^2_i\right] = {I\!\!E}\left[ \omega ^2_i(x)r^2(X_i)\right] + \rho _\epsilon (0)\, {I\!\!E}(\omega ^2_i(x)) = \left( r^2(x)+ \rho _\epsilon (0) + o(1)\right) {I\!\!E}(\omega ^2_i(x)) \end{aligned}$$

and with (13), we obtain

$$\begin{aligned} {I\!\!E}\left[ \omega ^2_i(x)Y^2_i\right] = \left( r^2(x)+ \rho _\epsilon (0)+o(1)\right) \left( (n-1) h^4 \varphi ^2(h) \psi _1 + (n-1)(n-2) h^4 \varphi ^3(h) \psi _2\right) . \end{aligned}$$

For the second term of \({I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,2}^2(x))\), with \(i_1\ne i_2\), since the \(X_i\)'s and the \(\epsilon _i\)'s are independent, we have

$$\begin{aligned} {I\!\!E}\left[ \omega _{i_1}(x)\omega _{i_2}(x) Y_{i_1}Y_{i_2}\right] = {I\!\!E}\left[ \omega _{i_1}(x)\omega _{i_2}(x)r(X_{i_1})r(X_{i_2})\right] + \rho _\epsilon (i_1-i_2)\, {I\!\!E}(\omega _{i_1}(x)\omega _{i_2}(x)) \end{aligned}$$

so that from (14), we have

$$\begin{aligned} \sum _{\begin{array}{c} 1\le i_1,i_2 \le n \\ i_1\ne i_2 \end{array}} {I\!\!E}\left[ \omega _{i_1}(x)\omega _{i_2}(x)Y_{i_1}Y_{i_2}\right]&= (r^2(x)+o(1))\, n(n-1)^2 h^4 \varphi ^3(h) \left( \psi _3 + (n-2)\varphi (h) \psi _4 \right) \\&\quad + (n-1) h^4 \varphi ^3(h) \left( \psi _3+(n-2)\varphi (h) \psi _4 \right) \sum ^{n}_{i_1=2}\sum ^{i_1-1}_{i_2=1}\rho _\epsilon (i_1-i_2). \end{aligned}$$

Collecting the above two terms, we obtain

$$\begin{aligned} {I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,2}^2(x))&= (r^2(x) + \rho _\epsilon (0))\frac{1}{n^2\varphi ^2(h)}\left( \psi _1 + n\varphi (h)\psi _2\right) + r^2(x)\frac{1}{n\varphi (h)}\left( \psi _3 + n\varphi (h)\psi _4\right) \\&\quad + \frac{1}{n^3\varphi (h)}\left( \psi _3 + n\varphi (h)\psi _4\right) {\tilde{s}}_n + o\left( \frac{1}{n\varphi (h)}\right) . \end{aligned}$$

Since \(n\varphi (h)\rightarrow \infty \) and \(\psi _4=\psi ^2_0\), it follows that

$$\begin{aligned} Var({{\widehat{r}}}_{\scriptscriptstyle h,2}(x))= \frac{1}{n\varphi (h)}(r^2(x) + \rho _\epsilon (0))\psi _2 + \frac{{\tilde{s}}_n}{n^2}\psi _4 + o\left( \frac{1}{n\varphi (h)}\right) . \end{aligned}$$

This completes the proof of assertion (ii).

Proof of (iii). We write

$$\begin{aligned} Cov({{\widehat{r}}}_{\scriptscriptstyle h,1}(x),{{\widehat{r}}}_{\scriptscriptstyle h,2}(x))= {I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,1}(x){{\widehat{r}}}_{\scriptscriptstyle h,2}(x)) - {I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,1}(x)){I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,2}(x)) \end{aligned}$$

with

$$\begin{aligned} {I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,1}(x){{\widehat{r}}}_{\scriptscriptstyle h,2}(x))&= \frac{1}{n^2(n-1)^2 h^4\varphi ^4(h)} \sum _{i=1}^{n}{I\!\!E}\left[ \omega ^2_i(x)Y_i\right] \\&\quad + \frac{1}{n^2(n-1)^2 h^4\varphi ^4(h)} \sum _{\begin{array}{c} 1\le i_1,i_2 \le n \\ i_1\ne i_2 \end{array}} {I\!\!E}\left[ \omega _{i_1}(x)\omega _{i_2}(x)Y_{i_2}\right] . \end{aligned}$$

The same techniques as in the proof of (ii) are applied to obtain

$$\begin{aligned} {I\!\!E}\left[ \omega ^2_i(x)Y_i\right] = \left( r(x)+o(1) \right) (n-1) h^4 \varphi ^2(h) \left( \psi _1 + (n-2) \varphi (h) \psi _2\right) \end{aligned}$$

and

$$\begin{aligned} {I\!\!E}\left[ \omega _{i_1}(x)\omega _{i_2}(x) Y_{i_2}\right] = (r(x)+o(1)) (n-1) h^4 \varphi ^3(h) \left( \psi _3 + (n-2)\varphi (h) \psi _4 \right) . \end{aligned}$$

It follows that

$$\begin{aligned} {I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,1}(x){{\widehat{r}}}_{\scriptscriptstyle h,2}(x))&= \frac{1}{n(n-1)\varphi ^2(h)} r(x)\left( \psi _1 + (n-2) \varphi (h) \psi _2\right) \\&\quad + \frac{1}{n\varphi (h)} r(x)\left( \psi _3 + (n-2) \varphi (h) \psi _4 \right) + o\left( \frac{1}{n\varphi (h)}\right) . \end{aligned}$$
(15)

Therefore, using (15) and the asymptotic expressions of \({I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,1}(x))\) and \({I\!\!E}({{\widehat{r}}}_{\scriptscriptstyle h,2}(x))\) from Lemma 2, we obtain

$$\begin{aligned} Cov({{\widehat{r}}}_{\scriptscriptstyle h,1}(x),{{\widehat{r}}}_{\scriptscriptstyle h,2}(x))&= \frac{1}{n\varphi (h)}\psi _2 r(x) + \psi _4 r(x) - \psi ^2_0 r(x) +o\left( \frac{1}{n\varphi (h)}\right) \\&= \frac{1}{n\varphi (h)}\psi _2 r(x) (1 + o(1)). \end{aligned}$$

This completes the proof of Lemma 3. \(\square \)

Proof of Corollary 2

In Corollary 1, we replace the autocovariance function \(\rho _\epsilon (j)\) by \(\mathcal {C}\,|j|^{-\gamma }\). Using \( \sum _{j=1}^n j^{-\gamma }= O(n^{1-\gamma })\), we have \(\frac{{\tilde{s}}_n}{n^2}\approx \frac{s_n}{n}= O(n^{-\gamma })\), where \(s_n=\sum ^{n}_{k=1}\left| \rho _\epsilon (k)\right| \). We deduce

$$\begin{aligned} MSE({{\widehat{r}}}_{\scriptscriptstyle h}(x))&= C^2 \phi ^{\prime \prime }(0)^2 h^4 + \frac{1}{n \varphi (h)} \left( \frac{3\psi _3}{\psi _0 ^2} r^2(x) + \frac{\rho _\epsilon (0)\psi _2}{\psi _0 ^2} \right) + O\left( \frac{1}{n^\gamma }\right) +o(h^4) \\&= C^2 \phi ^{\prime \prime }(0)^2 h^4 + \frac{1}{n^\gamma } \left\{ \frac{1}{n^{1-\gamma } \varphi (h)} \left( \frac{3\psi _3}{\psi _0 ^2} r^2(x) + \frac{\rho _\epsilon (0)\psi _2}{\psi _0 ^2} \right) + O(1)\right\} + o(h^4) \end{aligned}$$

and if \(n^{1-\gamma } \varphi (h) \rightarrow \infty \), then

$$\begin{aligned} MSE({{\widehat{r}}}_{\scriptscriptstyle h}(x)) = C^2 \phi ^{\prime \prime }(0)^2 h^4 + O\left( \frac{1}{n^\gamma }\right) + o(h^4). \end{aligned}$$

This completes the proof of Corollary 2. \(\square \)
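The growth estimate \(\sum _{j=1}^n j^{-\gamma }=O(n^{1-\gamma })\) used in this proof is easily confirmed numerically. The snippet below (an added illustration; the value of \(\gamma \) is arbitrary) shows \(s_n/n\) decaying at the rate \(n^{-\gamma }\), with \((s_n/n)\, n^{\gamma }\) stabilizing near \(1/(1-\gamma )\):

```python
import numpy as np

gamma = 0.4                          # long-range dependence: 0 < gamma < 1
for n in (10**3, 10**4, 10**5, 10**6):
    s_n = np.sum(np.arange(1, n + 1, dtype=float) ** (-gamma))  # sum of |rho(k)| = k^{-gamma}
    print(n, s_n / n, (s_n / n) * n**gamma)  # last column tends to 1/(1 - gamma) = 5/3
```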

Proof of Propositions 1 and 2

It suffices to choose the bandwidth that minimizes the asymptotic MSE of Corollary 1,

$$\begin{aligned} h^*=\frac{1}{n^{1/(\tau +4)}}, \end{aligned}$$

and to use the expression of the small ball probability \(\varphi (h)\sim Ch^\tau \). \(\square \)
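For completeness, the minimization itself is a routine calculus step; we sketch it with generic constants \(A, B>0\) standing for the squared-bias and variance factors. Setting \(g(h)=Ah^4 + B/(n h^{\tau })\),

$$\begin{aligned} g'(h^*)= 4A (h^*)^{3} - \frac{\tau B}{n\, (h^*)^{\tau +1}} = 0 \quad \Longleftrightarrow \quad h^* = \left( \frac{\tau B}{4An}\right) ^{1/(\tau +4)}, \end{aligned}$$

so that \(h^* \asymp n^{-1/(\tau +4)}\) and \(g(h^*)=O\left( n^{-4/(\tau +4)}\right) \).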

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Benhenni, K., Hajj Hassan, A. & Su, Y. The effect of correlated errors on the performance of local linear estimation of regression function based on random functional design. Stat Papers (2024). https://doi.org/10.1007/s00362-023-01523-z
