
AR(1) model with skew-normal innovations


Abstract

In this paper, we consider an autoregressive model of order one with skew-normal innovations. We propose several methods for estimating the parameters of the model and derive the limiting distributions of the estimators. Then, we study some statistical properties and the regression behavior of the proposed model. Finally, we provide a Monte Carlo simulation study for comparing performance of estimators and consider a real time series to illustrate the applicability of the proposed model.
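
To fix ideas, the model can be simulated directly: draw skew-normal innovations and iterate the AR(1) recursion. The sketch below is a minimal illustration in Python; the parameter values, the function name, and the use of scipy's skewnorm are illustrative choices, not taken from the paper.

```python
# Minimal simulation sketch of the model X_t = phi*X_{t-1} + Z_t with
# Z_t ~ SN(mu, sigma, lambda); all parameter values here are illustrative.
import numpy as np
from scipy.stats import skewnorm

def simulate_ar1_sn(n, phi, mu, sigma, lam, burn=500, seed=0):
    rng = np.random.default_rng(seed)
    # skew-normal innovations with location mu, scale sigma, slant lam
    z = skewnorm.rvs(a=lam, loc=mu, scale=sigma, size=n + burn, random_state=rng)
    x = np.empty(n + burn)
    x[0] = z[0]
    for t in range(1, n + burn):
        x[t] = phi * x[t - 1] + z[t]   # causal AR(1) recursion, |phi| < 1
    return x[burn:]                    # drop burn-in to approach stationarity

x = simulate_ar1_sn(1000, phi=0.5, mu=0.0, sigma=1.0, lam=3.0)
```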


References

  • Azzalini A (1985) A class of distributions which includes the normal ones. Scand J Stat 12:171–178


  • Bondon P (2009) Estimation of autoregressive models with epsilon-skew-normal innovations. J Multivariate Anal 100:1761–1776


  • Box GEP, Jenkins GM (1976) Time series analysis: forecasting and control. Holden-Day, San Francisco


  • Brockwell PJ, Davis RA (1991) Time series: theory and methods, 2nd edn. Springer, New York


  • Charalambides CA, Koutras MV, Balakrishnan N (2001) Probability and statistical models with applications. Chapman and Hall, London


  • Henze N (1986) A probabilistic representation of the skew-normal distribution. Scand J Stat 13:271–275


  • Jacobs PA, Lewis PAW (1977) A mixed autoregressive-moving average exponential sequence and point process, EARMA(1,1). Adv Appl Probab 9:87–104


  • Klimko LA, Nelson PI (1978) On conditional least squares estimation for stochastic processes. Ann Stat 6:629–642


  • Lawrance AJ, Lewis PAW (1985) Modelling and residual analysis of nonlinear autoregressive time series in exponential variables. J Roy Stat Soc B 47:165–183


  • Pourahmadi M (1988) Stationarity of the solution of \( X_{t}=\) \(A_{t}X_{t-1}+\varepsilon _{t}\) and analysis of non-Gaussian dependent random variables. J Time Ser Anal 9:225–239


  • Pourahmadi M (2001) Foundations of time series analysis and prediction theory. Wiley, New York


  • Searle SR (1971) Linear models. Wiley, New York


  • Stout WF (1974) Almost sure convergence. Academic Press, New York


  • Tarami B, Pourahmadi M (2003) Multi-variate t autoregressions: innovations, prediction variances and exact likelihood equations. J Time Ser Anal 24:739–754



Acknowledgments

The authors are grateful to the editor and an associate editor for their valuable encouraging comments and suggestions. This work was supported by the Research Council of Shiraz University.

Author information


Corresponding author

Correspondence to M. Sharafi.

Appendix

Proof of Theorem 2

Consider the log-likelihood \(L_{n}(\varvec{\theta })\) in (14), where for all \(\varvec{x}\in \mathbb {R}^{2}\),

$$\begin{aligned} l(\varvec{x};\varvec{\theta })=- \frac{1}{2}\ln (\sigma ^{2})- \frac{y^{2}}{2\sigma ^{2}}+\ln \varPhi \left( \frac{\lambda y}{\sigma }\right) +c, \end{aligned}$$
(26)

with \(y=x_{0}-\varphi x_{1}-\mu \) and \(c\) a constant. As in Bondon's approach, the basic technique of this proof is to control the behaviour of the first- and second-order terms in a Taylor expansion of \(L_{n}(\varvec{\theta })\) about \(\varvec{\theta }_{0}\). In some neighborhood \(S_{0}\) of \(\varvec{\theta }_{0}\), for almost all \(\varvec{x}\in \mathbb {R}^{2}\), the function \(l(\varvec{x};\varvec{\theta })\) is twice continuously differentiable with respect to \(\varvec{\theta }\). Then, for \(\varepsilon >0\) and \(\left\| \varvec{\theta } -\varvec{\theta }_{0}\right\| <\varepsilon \), the Taylor expansion of \(L_{n}(\varvec{\theta })\) about \(\varvec{\theta }_{0}\) is

$$\begin{aligned} L_{n}(\varvec{\theta })= & {} L_{n}(\varvec{\theta }_{0})+\left( \varvec{\theta } -\varvec{\theta }_{0}\right) ^{\top }\frac{\partial }{\partial \varvec{\theta }}L_{n}(\varvec{\theta }_{0})+ \frac{1}{2}(\varvec{\theta } -\varvec{\theta }_{0})^{\top }V_{n} (\varvec{\theta } -\varvec{\theta }_{0})\nonumber \\&+ \frac{1}{2}(\varvec{\theta } -\varvec{\theta }_{0})^{\top }T_{n}(\varvec{\theta }^{*})(\varvec{\theta } -\varvec{\theta }_{0}), \end{aligned}$$

where

$$\begin{aligned} V_{n}=\frac{\partial ^{2}}{\partial \varvec{\theta } \partial {\varvec{\theta }}^\top } L_{n}(\varvec{\theta }_{0}), \, T_{n}(\varvec{\theta }^{*})=\frac{\partial ^{2}}{\partial \varvec{\theta } \partial {\varvec{\theta }}^\top }L_{n}(\varvec{\theta }^{*}) - \frac{\partial ^{2}}{\partial \varvec{\theta } \partial {\varvec{\theta } }^\top }L_{n}(\varvec{\theta }_{0}), \end{aligned}$$

and \(\varvec{\theta }^{*}=\varvec{\theta }^{*}(X_{1}, \ldots , X_{n};\varvec{\theta })\) is an intermediate point between \(\varvec{\theta }\) and \(\varvec{\theta }_{0}\). Following Bondon's approach, to prove parts (i) and (ii) we must check conditions (A1)–(A4). To check condition (A1), from (26) we obtain

$$\begin{aligned} \begin{aligned} \frac{\partial }{\partial \varphi }l(\varvec{x};\varvec{\theta })&=\frac{x_{1}y}{\sigma ^{2}} - \frac{\lambda x_{1}}{\sigma }W\left( \frac{\lambda y}{\sigma }\right) , \\ \frac{\partial }{\partial \mu }l(\varvec{x};\varvec{\theta })&=\frac{y}{\sigma ^{2}} - \frac{\lambda }{\sigma }W\left( \frac{\lambda y}{\sigma }\right) , \\ \frac{\partial }{\partial \sigma ^{2}}l(\varvec{x};\varvec{\theta })&=- \frac{1}{2\sigma ^{2}} + \frac{y^{2}}{2\sigma ^{4}}- \frac{\lambda y}{2\sigma ^{3}}W \left( \frac{\lambda y}{\sigma }\right) , \\ \frac{\partial }{\partial \lambda }l(\varvec{x};\varvec{\theta })&=\frac{y}{\sigma }W\left( \frac{\lambda y}{\sigma }\right) . \end{aligned} \end{aligned}$$
(27)
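
As a numerical sanity check of (26)–(27), the analytic score can be compared with a finite-difference gradient of \(l\). The sketch below uses illustrative values; it computes \(W=\phi /\varPhi \) on the log scale for stability, and the additive constant \(c\) is omitted since it drops out of every derivative.

```python
# Finite-difference check (illustrative values) of the score (27) against the
# log term l in (26); theta = (phi, mu, sigma^2, lambda).
import numpy as np
from scipy.stats import norm

def loglik(theta, x0, x1):
    phi, mu, s2, lam = theta
    y = x0 - phi * x1 - mu
    return -0.5 * np.log(s2) - y**2 / (2 * s2) + norm.logcdf(lam * y / np.sqrt(s2))

def score(theta, x0, x1):
    phi, mu, s2, lam = theta
    s = np.sqrt(s2)
    y = x0 - phi * x1 - mu
    W = np.exp(norm.logpdf(lam * y / s) - norm.logcdf(lam * y / s))  # W = phi/Phi
    return np.array([x1 * y / s2 - lam * x1 / s * W,
                     y / s2 - lam / s * W,
                     -1 / (2 * s2) + y**2 / (2 * s2**2) - lam * y / (2 * s**3) * W,
                     y / s * W])

theta = np.array([0.5, 0.2, 1.3, 2.0])
x0, x1, eps = 1.1, -0.4, 1e-6
num = np.array([(loglik(theta + eps * np.eye(4)[i], x0, x1)
                 - loglik(theta - eps * np.eye(4)[i], x0, x1)) / (2 * eps)
                for i in range(4)])
assert np.allclose(score(theta, x0, x1), num, atol=1e-5)
```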

Finiteness of \(\mathbb {E}X_{t}^{2}\) and of \(\mathbb {E}[Y^{k}W(\frac{\lambda _{0}Y}{\sigma _{0}})]\) for \(k=0, 1\), where \(Y\sim SN(0, \sigma _{0}, \lambda _{0})\), implies that for \(1\le i\le 4\)

$$\begin{aligned} \mathbb {E}\left| \frac{\partial }{\partial \theta _{i}} l(X_{t}, X_{t-1};\varvec{\theta }_{0})\right| <\infty . \end{aligned}$$
(28)

The process \(U_{t}=\frac{\partial }{\partial \varvec{\theta } }l(X_{t}, X_{t-1};\varvec{\theta }_{0})\) is strictly stationary and ergodic, because \((X_{t})\) is strictly stationary and ergodic. Therefore, by the pointwise ergodic theorem for stationary sequences and (28), we have

$$\begin{aligned} n^{-1}\frac{\partial }{\partial \varvec{\theta }}L_{n}(\varvec{\theta }_{0})=n^{-1}\sum \limits _{t=2}^{n}\frac{\partial }{\partial \varvec{\theta }} l(X_{t}, X_{t-1};\varvec{\theta }_{0})\overset{a.s}{\longrightarrow }\mathbb {E}U_{t}, \end{aligned}$$

hence, to check condition (A1), we must prove \(\mathbb {E}U_{t}=0\). Since \(Y_{t}=Z_{t}-\mu _{0}\sim SN(0, \sigma _{0}, \lambda _{0})\), and since \(X_{t-i}\) and \(Z_{t}\) are independent for \(i>0\) (\(X_{t}\) is causal), we have

$$\begin{aligned} \mathbb {E}(Z_{t}-\mu _{0})= & {} b\sigma _{0}\delta _{0}, \nonumber \\ \mathbb {E}(Z_{t}-\mu _{0})^{2}= & {} \sigma _{0}^{2}, \nonumber \\ \mathbb {E}\left[ W\left( \frac{\lambda _{0}(Z_{t}-\mu _{0})}{\sigma _{0}}\right) \right]= & {} \frac{b}{\sqrt{1+\lambda _{0}^{2}}}, \nonumber \\ \mathbb {E}\left[ (Z_{t}-\mu _{0})^{k} W\left( \frac{\lambda _{0}(Z_{t}-\mu _{0})}{\sigma _{0}}\right) \right]= & {} 0, \quad (k \hbox { is odd}) \\ \mathbb {E}\left[ \left( Z_{t}-\mu _{0}\right) ^{2} W \left( \frac{\lambda _{0}\left( Z_{t}-\mu _{0}\right) }{\sigma _{0}}\right) \right]= & {} \frac{b\sigma _{0}^{2}}{\left( 1+\lambda _{0}^{2}\right) \sqrt{1+\lambda _{0}^{2}}}, \nonumber \\ \mathbb {E}[X_{t-1} (Z_{t}-\mu _{0})]= & {} 0, \nonumber \\ \mathbb {E}\left[ X_{t-1} W\left( \frac{\lambda _{0}(Z_{t}-\mu _{0})}{\sigma _{0}} \right) \right]= & {} m\frac{b}{\sqrt{1+\lambda _{0}^{2}}}.\nonumber \end{aligned}$$
(29)
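
These identities are easy to verify by Monte Carlo. The sketch below checks three of them for \(Y=Z_{t}-\mu _{0}\sim SN(0, \sigma _{0}, \lambda _{0})\), assuming \(b=\sqrt{2/\pi }\) (the usual skew-normal moment constant) and illustrative parameter values.

```python
# Monte Carlo check (illustrative) of three identities in (29) for
# Y = Z_t - mu_0 ~ SN(0, sigma_0, lambda_0), assuming b = sqrt(2/pi).
import numpy as np
from scipy.stats import norm, skewnorm

sigma0, lam0 = 1.5, 2.0
b = np.sqrt(2 / np.pi)
Y = skewnorm.rvs(a=lam0, scale=sigma0, size=1_000_000, random_state=1)
W = np.exp(norm.logpdf(lam0 * Y / sigma0) - norm.logcdf(lam0 * Y / sigma0))
print(W.mean(), b / np.sqrt(1 + lam0**2))                      # E[W(lam0*Y/sigma0)]
print((Y * W).mean())                                          # ~ 0 (k = 1, odd)
print((Y**2 * W).mean(), b * sigma0**2 / (1 + lam0**2)**1.5)   # E[Y^2 * W(.)]
```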

By (27) and (29), we get \(\mathbb {E}U_{t}=0\) and condition (A1) is checked.

To check condition (A2), using (27) we obtain all the second-order partial derivatives of \(l(\varvec{x};\varvec{\theta })\) with respect to \(\varvec{\theta }\) as follows:

$$\begin{aligned} \begin{aligned} \frac{\partial ^{2}}{\partial \varphi ^{2}}l(\varvec{x};\varvec{\theta })&=- \frac{x_{1}^{2}}{\sigma ^{2}}- \frac{\lambda x_{1}}{\sigma }\frac{\partial }{\partial \varphi } W\left( \frac{\lambda y}{\sigma }\right) , \\ \frac{\partial ^{2}}{\partial \varphi \partial \mu }l(\varvec{x};\varvec{\theta })&=- \frac{x_{1}}{\sigma ^{2}}- \frac{\lambda x_{1}}{\sigma }\frac{\partial }{\partial \mu }W\left( \frac{\lambda y}{\sigma }\right) , \\ \frac{\partial ^{2}}{\partial \varphi \partial \sigma ^{2}}l(\varvec{x};\varvec{\theta })&=- \frac{x_{1}y}{\sigma ^{4}}+ \frac{\lambda x_{1}}{2\sigma ^{3}}W\left( \frac{\lambda y}{\sigma }\right) - \frac{\lambda x_{1}}{\sigma }\frac{\partial }{\partial \sigma ^{2}}W\left( \frac{\lambda y}{\sigma }\right) , \\ \frac{\partial ^{2}}{\partial \varphi \partial \lambda }l(\varvec{x};\varvec{\theta })&=- \frac{x_{1}}{\sigma }W\left( \frac{\lambda y}{\sigma }\right) - \frac{\lambda x_{1}}{\sigma } \frac{\partial }{\partial \lambda }W\left( \frac{\lambda y}{\sigma }\right) , \\ \frac{\partial ^{2}}{\partial \mu ^{2}}l(\varvec{x};\varvec{\theta })&=- \frac{1}{\sigma ^{2}}- \frac{\lambda }{\sigma }\frac{\partial }{\partial \mu } W\left( \frac{\lambda y}{\sigma }\right) , \\ \frac{\partial ^{2}}{\partial \mu \partial \sigma ^{2}}l(\varvec{x};\varvec{\theta })&=- \frac{y }{\sigma ^{4}}+ \frac{\lambda }{2\sigma ^{3}}W\left( \frac{\lambda y}{\sigma }\right) - \frac{\lambda }{\sigma }\frac{\partial }{\partial \sigma ^{2}}W \left( \frac{\lambda y}{\sigma }\right) , \\ \frac{\partial ^{2}}{\partial \mu \partial \lambda }l(\varvec{x};\varvec{\theta })&=- \frac{W( \frac{\lambda y}{\sigma })}{\sigma }- \frac{\lambda }{\sigma }\frac{\partial }{\partial \lambda }W\left( \frac{\lambda y}{\sigma }\right) , \\ \frac{\partial ^{2}}{\partial (\sigma ^{2})^{2}}l(\varvec{x};\varvec{\theta })&=\frac{1}{2\sigma ^{4}}- \frac{y^{2}}{\sigma ^{6}}+ \frac{3}{4}\frac{\lambda y}{\sigma ^{5}}W\left( \frac{\lambda y}{\sigma }\right) - \frac{\lambda y}{2\sigma ^{3}}\frac{\partial }{\partial \sigma ^{2}}W\left( \frac{\lambda y}{\sigma }\right) , \\ \frac{\partial ^{2}}{\partial \sigma ^{2}\partial \lambda }l(\varvec{x};\varvec{\theta })&=- \frac{y}{2\sigma ^{3}}W\left( \frac{\lambda y}{\sigma }\right) - \frac{\lambda y}{2\sigma ^{3}}\frac{\partial }{\partial \lambda }W\left( \frac{\lambda y}{\sigma }\right) , \\ \frac{\partial ^{2}}{\partial \lambda ^{2}}l(\varvec{x};\varvec{\theta })&=\frac{y}{\sigma } \frac{\partial }{\partial \lambda }W\left( \frac{\lambda y}{\sigma }\right) , \end{aligned} \end{aligned}$$
(30)

where \(y=x_{0}-\varphi x_{1}-\mu \), \(W(\cdot )=\frac{\phi (\cdot )}{\varPhi (\cdot )}\), and

$$\begin{aligned} \begin{aligned} \frac{\partial }{\partial \varphi }W\left( \frac{\lambda y}{\sigma }\right)&=\frac{\lambda ^{2}}{\sigma ^{2}}x_{1}y W\left( \frac{\lambda y}{\sigma }\right) + \frac{\lambda }{\sigma }x_{1} W^{2}\left( \frac{\lambda y}{\sigma }\right) , \\ \frac{\partial }{\partial \mu }W\left( \frac{\lambda y}{\sigma }\right)&=\frac{\lambda ^{2}}{\sigma ^{2}}yW\left( \frac{\lambda y}{\sigma }\right) + \frac{\lambda }{\sigma }W^{2}\left( \frac{\lambda y}{\sigma }\right) , \\ \frac{\partial }{\partial \sigma ^{2}}W\left( \frac{\lambda y}{\sigma }\right)&=\frac{\lambda ^{2}}{2\sigma ^{4}}y^{2}W\left( \frac{\lambda y}{\sigma }\right) + \frac{\lambda y}{2\sigma ^{3}}W^{2} \left( \frac{\lambda y}{\sigma }\right) , \\ \frac{\partial }{\partial \lambda }W\left( \frac{\lambda y}{\sigma }\right)&=- \frac{\lambda }{\sigma ^{2}}y^{2}W\left( \frac{\lambda y}{\sigma }\right) - \frac{y}{\sigma }W^{2}\left( \frac{\lambda y}{\sigma }\right) . \end{aligned} \end{aligned}$$
(31)
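
The expressions in (30)–(31) can be checked numerically as well. For instance, by (30) the \((\lambda , \lambda )\) entry of the Hessian equals \(\frac{y}{\sigma }\frac{\partial }{\partial \lambda }W(\frac{\lambda y}{\sigma })\), and only the term \(\ln \varPhi (\lambda y/\sigma )\) of \(l\) depends on \(\lambda \); a sketch with illustrative values:

```python
# Second-difference check (illustrative values) that d^2 l / d lambda^2 equals
# (y/sigma) * dW/dlambda, with dW/dlambda taken from (31).
import numpy as np
from scipy.stats import norm

def W(u):
    return np.exp(norm.logpdf(u) - norm.logcdf(u))   # W = phi/Phi

y, sigma, lam, eps = 0.7, 1.3, 2.0, 1e-4
u = lam * y / sigma
dW_dlam = -lam * y**2 / sigma**2 * W(u) - y / sigma * W(u)**2    # (31)
analytic = y / sigma * dW_dlam                                   # (30)
f = lambda a: norm.logcdf(a * y / sigma)   # the only lambda-dependent term of l
numeric = (f(lam + eps) - 2 * f(lam) + f(lam - eps)) / eps**2
assert np.isclose(analytic, numeric, atol=1e-6)
```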

Since, for \(\varvec{\theta }_{0}\in \varOmega \cap A\), \(\mathbb {E}X_{t}^{2}\) and \(a_{k}= \mathbb {E}[Y^{k}W^2(\frac{\lambda _{0}Y}{\sigma _{0}})]\), \(k=0, 1, 2\), are finite, we deduce

$$\begin{aligned} \mathbb {E}\left| \frac{\partial ^{2}}{\partial \theta _{i}\partial \theta _{j}}l(X_{t}, X_{t-1}; \varvec{\theta }_{0})\right| <\infty , \quad 1\le i\le j\le 4. \end{aligned}$$
(32)

The process \(V_{t}=\frac{\partial ^{2}}{\partial \varvec{\theta } \partial {\varvec{\theta }}^\top }l(X_{t}, X_{t-1}; \varvec{\theta }_{0})\) is strictly stationary and ergodic. By the pointwise ergodic theorem, we get

$$\begin{aligned} n^{-1}V_{n}\overset{a.s}{\longrightarrow }V, \end{aligned}$$

where \(V=-\mathbb {E}(\frac{\partial ^{2}}{\partial \varvec{\theta } \partial {\varvec{\theta }}^\top }l(X_{t}, X_{t-1};\varvec{\theta }_{0}))\). By (29)–(32), the matrix V is given by

$$\begin{aligned} V=\left( \begin{array}{cccc} \left( \text {Cum}_{{\widetilde{X}}, {\widetilde{X}}}(0) +m^{2}\right) c_{1} &{}\quad mc_{1} &{}\quad mc_{2} &{}\quad mc_{3}\\ mc_{1} &{}\quad c_{1} &{}\quad c_{2} &{}\quad c_{3}\\ mc_{2} &{}\quad c_{2} &{}\quad \frac{1}{2\sigma _{0}^{4}}+ \frac{\lambda _{0}^{2}}{4\sigma _{0}^{4}}c_{4} &{}\quad - \frac{\lambda _{0}}{2\sigma _{0}^{2}}c_{4}\\ mc_{3} &{}\quad c_{3} &{}\quad - \frac{\lambda _{0}}{2\sigma _{0}^{2}}c_{4} &{}\quad c_{4} \end{array} \right) . \end{aligned}$$

The matrix V is positive definite because \(\det (V_{k})>0\), \(1\le k\le 4\), for \(\varvec{\theta }_{0}\in \varOmega \cap A\); these conditions reduce to (15) and (16). Here \(V_{k}\) is the \(k\times k\) leading principal submatrix obtained by deleting the last \(4-k\) rows and columns of V (Lemma 1, Searle 1971).
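
The leading-principal-minor test is straightforward to carry out numerically. In the sketch below, the values of m, \(c_{1}, \ldots , c_{4}\) and \(\text {Cum}_{{\widetilde{X}}, {\widetilde{X}}}(0)\) are placeholders only, not quantities computed from the model.

```python
# Leading-principal-minor test for positive definiteness of V (Searle 1971,
# Lemma 1). All numeric inputs are placeholders, not values from the model.
import numpy as np

m, c1, c2, c3, c4 = 0.8, 1.2, 0.3, 0.5, 0.9
cum0, s2_0, lam0 = 1.1, 1.0, 2.0       # placeholders: Cum(0), sigma_0^2, lambda_0
V = np.array([
    [(cum0 + m**2) * c1, m * c1, m * c2, m * c3],
    [m * c1, c1, c2, c3],
    [m * c2, c2, 1 / (2 * s2_0**2) + lam0**2 / (4 * s2_0**2) * c4, -lam0 / (2 * s2_0) * c4],
    [m * c3, c3, -lam0 / (2 * s2_0) * c4, c4],
])
minors = [np.linalg.det(V[:k, :k]) for k in range(1, 5)]
print(minors, all(d > 0 for d in minors))   # V is positive definite iff all > 0
```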

To prove condition (A3), following Bondon's method, we shall show that there exist measurable functions \(g_{i, j, k}:\mathbb {R}^{2}\rightarrow \mathbb {R}\), \(1\le i, j, k\le 4\), such that

$$\begin{aligned} \left| \frac{\partial ^{3}}{\partial \theta _{i}\partial \theta _{j}\partial \theta _{k}}l(\varvec{x};\varvec{\theta })\right|<g_{i, j, k}(\varvec{x})\quad \hbox {and}\quad \mathbb {E} \left( g_{i, j, k}(X_{t}, X_{t-1})\right) <\infty , \end{aligned}$$
(33)

for almost all \(\varvec{x}\in \mathbb {R}^{2}\). Every third-order partial derivative of \(l(\varvec{x};\varvec{\theta })\) with respect to \(\varvec{\theta }\) is a sum of terms of the form

$$\begin{aligned} h(\varvec{x};\varvec{\theta })=\frac{r\lambda ^{i}x_{1}^{j}y^{k}W^{l} \left( \frac{\lambda y}{\sigma }\right) }{\sigma ^{q}}, \end{aligned}$$
(34)

where \(r\) is an integer and \(i, j, k, l, q\) are nonnegative integers. Hence, to prove (33), it is sufficient to show that there exists a measurable function \(g:\mathbb {R}^{2}\rightarrow \mathbb {R}\) such that

$$\begin{aligned} \left| h(\varvec{x};\varvec{\theta })\right| \le g(\varvec{x}), \quad \hbox {and}\quad \mathbb {E}\left( g(X_{t}, X_{t-1})\right) <\infty , \end{aligned}$$

for all \(\varvec{\theta } \in N\) and for almost all \(\varvec{x}\in \mathbb {R}^{2}\). Now we choose \(N\) such that, for all \(\varvec{\theta } \in N\),

$$\begin{aligned} \left| \varphi \right|<1, \left| \mu \right| \le 2\left| \mu _{0}\right| , \sigma >\frac{\sigma _{0}}{2}, \ \left| \lambda \right| <\left| \lambda _{0}\right| , W\left( \frac{\lambda y}{\sigma }\right) \le W\left( \frac{2}{\sigma _{0}}\left| \lambda _{0}y\right| \right) , \end{aligned}$$
(35)

then for all \((\varvec{\theta }, \varvec{x})\in N\times \mathbb {R}^{2}\), we have

$$\begin{aligned} \left| h(\varvec{x};\varvec{\theta })\right|\le & {} \frac{2^{q} \left| r\right| \left| \lambda _{0}\right| ^{i} \left| x_{1}\right| ^{j} \left| y\right| ^{k}}{\sigma _{0}^{q}}W^{l}\left( \left| \frac{2\lambda _{0}y}{\sigma _{0}}\right| \right) \\\le & {} \frac{2^{q} \left| r\right| \left| \lambda _{0}\right| ^{i} \left| x_{1}\right| ^{j}}{\sigma _{0}^{q} } (\left| x_{0}\right| +\left| x_{1}\right| +2\left| \mu _{0}\right| )^{k}(\sqrt{2/ \pi })^{l}. \quad \hbox {(a.e.)} \end{aligned}$$
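
The last step uses the bound \(W(u)\le \sqrt{2/\pi }\) for \(u\ge 0\), which holds because \(\varPhi (u)\ge 1/2\) and \(\phi (u)\le 1/\sqrt{2\pi }\) there; a quick numerical confirmation:

```python
# Numerical confirmation that W(u) = phi(u)/Phi(u) <= sqrt(2/pi) for u >= 0;
# the maximum is attained at u = 0, where W(0) = 2/sqrt(2*pi).
import numpy as np
from scipy.stats import norm

u = np.linspace(0.0, 20.0, 10001)
W = np.exp(norm.logpdf(u) - norm.logcdf(u))
assert W.max() <= np.sqrt(2 / np.pi) + 1e-12
```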

Finiteness of \(\mathbb {E}X_{t}^{k}\), \(k\ge 1\), implies that

$$\begin{aligned} \mathbb {E}\left[ (\left| X_{t}\right| +\left| X_{t-1}\right| +2\left| \mu _{0}\right| )^{k} \left| X_{t-1}\right| ^{j}\right] <\infty , \end{aligned}$$

and hence (A3) is proved. Condition (A4) is checked similarly, following Bondon's approach. \(\square \)



Cite this article

Sharafi, M., Nematollahi, A.R. AR(1) model with skew-normal innovations. Metrika 79, 1011–1029 (2016). https://doi.org/10.1007/s00184-016-0587-7

