Abstract
In this paper, we consider an autoregressive model of order one with skew-normal innovations. We propose several methods for estimating the parameters of the model and derive the limiting distributions of the estimators. We then study some statistical properties and the regression behavior of the proposed model. Finally, we provide a Monte Carlo simulation study to compare the performance of the estimators and analyze a real time series to illustrate the applicability of the proposed model.
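For illustration, such a model can be simulated directly from the stochastic representation of the skew-normal law (Henze 1986). The sketch below assumes the parametrization \(X_{t}=\varphi X_{t-1}+Z_{t}\) with \(Z_{t}-\mu \sim SN(0, \sigma , \lambda )\), consistent with the proof in the Appendix; all function names are ours, not the authors'.

```python
import math
import random

def draw_sn(sigma, lam, rng):
    # SN(0, sigma, lambda) via Henze's (1986) representation:
    # Z = sigma * (delta*|U0| + sqrt(1 - delta^2)*U1), U0, U1 iid N(0, 1),
    # where delta = lambda / sqrt(1 + lambda^2).
    delta = lam / math.sqrt(1.0 + lam * lam)
    u0, u1 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    return sigma * (delta * abs(u0) + math.sqrt(1.0 - delta * delta) * u1)

def simulate_ar1_sn(n, mu, phi, sigma, lam, seed=0, burn=200):
    # X_t = phi * X_{t-1} + Z_t with Z_t = mu + SN(0, sigma, lambda) noise;
    # a burn-in stretch is discarded so the returned path is close to
    # stationarity (requires |phi| < 1).
    rng = random.Random(seed)
    x, path = 0.0, []
    for t in range(n + burn):
        x = phi * x + mu + draw_sn(sigma, lam, rng)
        if t >= burn:
            path.append(x)
    return path

path = simulate_ar1_sn(500, mu=0.0, phi=0.6, sigma=1.0, lam=2.0)
```

With \(\lambda >0\) the innovations are right-skewed, so even with \(\mu =0\) the stationary mean of the path is positive.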
References
Azzalini A (1985) A class of distributions which includes the normal ones. Scand J Stat 12:171–178
Bondon P (2009) Estimation of autoregressive models with epsilon-skew-normal innovations. J Multivariate Anal 100:1761–1776
Box GEP, Jenkins GM (1976) Time series analysis: forecasting and control. Holden Day, San Francisco
Brockwell PJ, Davis RA (1991) Time series: theory and methods, 2nd edn. Springer, New York
Charalambides ChA, Koutras MV, Balakrishnan N (2001) Probability and statistical models with applications. Chapman and Hall, London
Henze N (1986) A probabilistic representation of the skew-normal distribution. Scand J Stat 13:271–275
Jacobs PA, Lewis PAW (1977) A mixed autoregressive-moving average exponential sequence and point process, EARMA (1,1). Adv Appl Probability 9:87–104
Klimko LA, Nelson PI (1978) On conditional least squares estimation for stochastic processes. Ann Stat 6:629–642
Lawrance AJ, Lewis PAW (1985) Modelling and residual analysis of nonlinear autoregressive time series in exponential variables. J Roy Stat Soc B 47:165–183
Pourahmadi M (1988) Stationarity of the solution of \( X_{t}=\) \(A_{t}X_{t-1}+\varepsilon _{t}\) and analysis of non-Gaussian dependent random variables. J Time Ser Anal 9:225–239
Pourahmadi M (2001) Foundation of time series analysis and prediction theory. Wiley, New York
Searle SR (1971) Linear models. John Wiley and Sons, New York
Stout WF (1974) Almost sure convergence. Academic Press, New York
Tarami B, Pourahmadi M (2003) Multi-variate t autoregressions: innovations, prediction variances and exact likelihood equations. J Time Ser Anal 24:739–754
Acknowledgments
The authors are grateful to the editor and an associate editor for their valuable encouraging comments and suggestions. This work was supported by the Research Council of Shiraz University.
Appendix
Proof of Theorem 2
Consider the log-likelihood \(L_{n}(\varvec{\theta })\) in (14), where for all \(\varvec{x}\in \mathbb {R}^{2}\),
with \(y=x_{0}-\varphi x_{1}-\mu \) and \(c\) a constant. Following Bondon’s approach, the basic technique of this proof is to control the behaviour of the first- and second-order terms in a Taylor expansion of \(L_{n}(\varvec{\theta })\) about \(\varvec{\theta }_{0}\). In some neighborhood \(S_{0}\) of \(\varvec{\theta }_{0}\), for almost all \(\varvec{x}\in \mathbb {R}^{2}\), the function \(l(\varvec{x};\varvec{\theta })\) is twice continuously differentiable with respect to \(\varvec{\theta }\). Then, for \(\varepsilon >0\) and \(\left\| \varvec{\theta } -\varvec{\theta }_{0}\right\| <\varepsilon \), the Taylor expansion of \(L_{n}(\varvec{\theta })\) about \(\varvec{\theta }_{0}\) is
where
and \(\varvec{\theta }^{*}=\varvec{\theta }^{*}(X_{1}, \ldots , X_{n};\varvec{\theta })\) is an intermediate point between \(\varvec{\theta }\) and \(\varvec{\theta }_{0}\). Following Bondon’s approach, to prove parts (i) and (ii) we must verify conditions (A1)–(A4). To check condition (A1), by (26) we obtain
Finiteness of \(\mathbb {E}X_{t}^{2}\) and \(\mathbb {E}[Y^{k}W(\frac{\lambda _{0}Y}{\sigma _{0}})]\) for \(k=0, 1\), where \(Y\sim SN(0, \sigma _{0}, \lambda _{0})\), implies that for \(1\le i\le 4\)
The process \(U_{t}=\frac{\partial }{\partial \varvec{\theta } }l(X_{t}, X_{t-1};\varvec{\theta }_{0})\) is strictly stationary and ergodic, because \((X_{t})\) is strictly stationary and ergodic. Therefore, by the pointwise ergodic theorem for stationary sequences and (28), we have
hence, to check condition (A1), it remains to prove that \(\mathbb {E}U_{t}=0\). Since \(Y_{t}=Z_{t}-\mu _{0}\sim SN(0, \sigma _{0}, \lambda _{0})\) and, by causality of \((X_{t})\), \(X_{t-i}\) and \(Z_{t}\) are independent for \(i>0\), we have
By (27) and (29), we get \(\mathbb {E}U_{t}=0\) and condition (A1) is checked.
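For reference, the innovation law used above is the skew-normal distribution of Azzalini (1985). Under the definitions in the proof (\(y=x_{0}-\varphi x_{1}-\mu \)), the conditional log-density has the form sketched below; this is a reconstruction from the standard \(SN\) density, not a quotation of the paper’s Eq. (27):

```latex
f(y;\sigma ,\lambda )=\frac{2}{\sigma }\,\phi \Big (\frac{y}{\sigma }\Big )
\varPhi \Big (\frac{\lambda y}{\sigma }\Big ),
\qquad
l(\varvec{x};\varvec{\theta })
=c-\log \sigma -\frac{y^{2}}{2\sigma ^{2}}
+\log \varPhi \Big (\frac{\lambda y}{\sigma }\Big ),
```

where the constant \(c=\log 2-\tfrac{1}{2}\log (2\pi )\) collects the terms free of \(\varvec{\theta }\).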
To check condition (A2), we obtain from (27) all the second-order partial derivatives of \(l(\varvec{x};\varvec{\theta })\) with respect to \(\varvec{\theta }\) as follows:
where \(y=x_{0}-\varphi x_{1}-\mu \), \(W(\cdot )=\frac{\phi (\cdot )}{\varPhi (\cdot )}\), and
Since, for \(\varvec{\theta }_{0}\in \varOmega \cap A\), \(\mathbb {E}X_{t}^{2}\) and \(a_{k}= \mathbb {E}[Y^{k}W^{2}(\frac{\lambda _{0}Y}{\sigma _{0}})]\), \(k=0, 1, 2\), are finite, we deduce
The process \(V_{t}=\frac{\partial ^{2}}{\partial \varvec{\theta } \partial {\varvec{\theta }}^\top }l(X_{t}, X_{t-1}; \varvec{\theta }_{0})\) is strictly stationary and ergodic. By the pointwise ergodic theorem, we get
where \(V=-\mathbb {E}(\frac{\partial ^{2}}{\partial \varvec{\theta } \partial {\varvec{\theta }}^\top }l(X_{t}, X_{t-1};\varvec{\theta }_{0}))\). By (29)–(32), the matrix V is given by
The matrix V is positive definite because \(\det (V_{k})>0, \ 1\le k\le 4\), for \(\varvec{\theta }_{0}\in \varOmega \cap A\); these conditions reduce to (15) and (16). Here \(V_{k}\) is the \(k\times k\) submatrix formed by deleting the last \((4-k)\) rows and the last \((4-k)\) columns of V (Lemma 1, Searle 1971).
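This criterion (positivity of all leading principal minors, i.e. Sylvester’s criterion for a symmetric matrix) is easy to check numerically. The sketch below uses an arbitrary illustrative \(4\times 4\) matrix, not the V of the proof:

```python
def det(m):
    # Determinant by Laplace expansion along the first row
    # (adequate for the small matrices considered here).
    if len(m) == 1:
        return m[0][0]
    total = 0.0
    for j in range(len(m)):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += ((-1) ** j) * m[0][j] * det(minor)
    return total

def is_positive_definite(m):
    # Sylvester's criterion: every leading principal minor det(V_k),
    # obtained by keeping the first k rows and columns, must be > 0.
    return all(det([row[:k] for row in m[:k]]) > 0 for k in range(1, len(m) + 1))

# Illustrative symmetric, diagonally dominant matrix (positive definite).
V = [[2.0, 0.5, 0.0, 0.1],
     [0.5, 1.0, 0.2, 0.0],
     [0.0, 0.2, 1.5, 0.3],
     [0.1, 0.0, 0.3, 1.0]]
```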
To prove condition (A3), similar to Bondon’s method, we shall prove that there exist measurable functions \(g_{i, j, k}:\mathbb {R}^{2}\rightarrow \mathbb {R}\), \(1\le i, j, k\le 4\), such that
for almost all \(\varvec{x}\in \mathbb {R}^{2}\). Indeed, all the third-order partial derivatives of \(l(\varvec{x};\varvec{\theta })\) with respect to \(\varvec{\theta }\) are sums of terms of the form
where \(r\) is an integer and \(i, j, k, l, q\) are nonnegative integers. Hence, to prove (33), it suffices to show that there exists a measurable function \(g:\mathbb {R}^{2}\rightarrow \mathbb {R}\) such that
for all \(\varvec{\theta } \in N\) and for almost all \(\varvec{x}\in \mathbb {R}^{2}\). Now we choose N such that, for all \(\varvec{\theta } \in N\),
then for all \((\varvec{\theta }, \varvec{x})\in N\times \mathbb {R}^{2}\), we have
Finiteness of \(\mathbb {E}X_{t}^{k}\), \(k\ge 1\), implies that
and then (A3) is proved. The method of checking condition (A4) is similar to Bondon’s approach. \(\square \)
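The ratio \(W(\cdot )=\frac{\phi (\cdot )}{\varPhi (\cdot )}\), which recurs throughout the score and Hessian above, can be evaluated with only the standard library; the function names below are ours. A sanity check: \(W(0)=2\phi (0)=\sqrt{2/\pi }\), and \(W\) is strictly decreasing.

```python
import math

def Phi(x):
    # Standard normal CDF expressed through the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):
    # Standard normal density.
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def W(x):
    # W(x) = phi(x) / Phi(x), the ratio appearing in the derivatives of
    # the skew-normal log-likelihood.
    return phi(x) / Phi(x)
```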
Sharafi, M., Nematollahi, A.R. AR(1) model with skew-normal innovations. Metrika 79, 1011–1029 (2016). https://doi.org/10.1007/s00184-016-0587-7