The autoregressive model of order $p$, written as $\text{AR}(p)$, expresses a time series as a function of its own past values and a [[White Noise Model|White Noise Term]] $W_t$:
$ X_t=c+\phi_1 X_{t-1}+\dots+\phi_pX_{t-p}+W_t $
where:
- $X_{t-i}$: Past values of the time series
- $\phi_i$: Autoregressive coefficients weighting the past values
- $c$: Fixed intercept term
- $W_t$: White noise term
![[autoregressive-model.png|center|400]]
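As a quick numerical illustration, here is a minimal NumPy sketch that simulates an AR(p) path directly from the definition above. The function name `simulate_ar` and all parameter values are illustrative assumptions, not part of the model itself.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_ar(coeffs, c=0.0, n=500, sigma=1.0, burn_in=100):
    """Simulate X_t = c + phi_1 X_{t-1} + ... + phi_p X_{t-p} + W_t,
    with white noise W_t ~ N(0, sigma^2)."""
    p = len(coeffs)
    x = np.zeros(n + burn_in)
    w = rng.normal(0.0, sigma, size=n + burn_in)  # the white noise term W_t
    for t in range(p, n + burn_in):
        # x[t-p:t][::-1] is (X_{t-1}, ..., X_{t-p}), matching (phi_1, ..., phi_p)
        x[t] = c + np.dot(coeffs, x[t - p:t][::-1]) + w[t]
    return x[burn_in:]  # drop the burn-in so the start-up transient fades

series = simulate_ar([0.7], c=1.0)  # an AR(1) path with phi = 0.7
```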
## Stationarity
In the simple case of $\mathrm{AR}(1)$, the property of [[Stationarity]] depends on the $\phi$ coefficient.
- If $\lvert \phi \rvert <1$, the process is *stationary*.
- If $\phi=1$, the model is a [[Random Walk Model|Random Walk]], which is proven to be *non-stationary*.
- If $\lvert \phi \rvert>1$, the series explodes and the model is *non-stationary*.
For $\mathrm{AR}(p)$ models where $p>1$, the stationarity condition is more complex to evaluate. For such a model to be stationary, all roots of the characteristic equation $1-\phi_1 z-\dots-\phi_p z^p=0$ must lie outside the unit circle.
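The root condition can be checked numerically. A minimal sketch, assuming NumPy (`is_stationary` is a hypothetical helper name): the coefficients of the characteristic polynomial are passed to `np.roots`, and stationarity requires every root to have modulus greater than one.

```python
import numpy as np

def is_stationary(coeffs):
    """True if all roots of 1 - phi_1 z - ... - phi_p z^p lie outside the unit circle."""
    # np.roots expects coefficients ordered from the highest power down to the constant.
    poly = np.r_[-np.asarray(coeffs)[::-1], 1.0]
    return bool(np.all(np.abs(np.roots(poly)) > 1.0))

print(is_stationary([0.7]))       # True:  AR(1) with |phi| < 1
print(is_stationary([1.0]))       # False: random walk, root on the unit circle
print(is_stationary([0.5, 0.3]))  # True:  an AR(2) example
```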
## Expectation of AR(1)
The $\mathrm{AR}(1)$ model takes the following form:
$ X_t = c+\phi X_{t-1}+W_t$
The expectation is constant and well defined only when the process is stationary. Under that assumption it can be derived as follows:
$
\begin{align}
\mathbb E[X_t] &= \mathbb E[c+\phi X_{t-1}+W_t] \tag{1}\\[4pt]
\mathbb E[X_t] &= c+ \phi\mathbb E[X_{t-1}] + \mathbb E[W_t] \tag{2}\\[4pt]
\mathbb E[X_t] &= c+ \phi\mathbb E[X_{t-1}] \tag{3}\\[4pt]
\mu &=c+\phi \mu \tag{4}\\[2pt]
\mu &=\frac{c}{1-\phi} \tag{5}
\end{align}
$
where:
- (1) Take expectations on both sides.
- (2) Apply [[Linearity of Expectations]] to split up the expectation of the sum into sum of expectations.
- (3) The expectation of the noise term is zero and therefore cancels out.
- (4) By assuming stationarity we can set both $\mathbb E[X_t]$ and $\mathbb E[X_{t-1}]$ to $\mu$, as the expectation stays constant in a stationary model.
- (5) Solve for $\mu$; this requires $\phi \neq 1$, consistent with the stationarity condition.
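The closed form can be sanity-checked by simulation. A minimal sketch, where the parameter values are arbitrary assumptions: the sample mean of a long stationary AR(1) path should be close to $c/(1-\phi)$.

```python
import numpy as np

rng = np.random.default_rng(0)
c, phi, n = 1.0, 0.7, 200_000

x = np.zeros(n)
for t in range(1, n):
    x[t] = c + phi * x[t - 1] + rng.normal()  # W_t ~ N(0, 1)

print(x.mean())       # sample mean, roughly 3.33
print(c / (1 - phi))  # theoretical mean: 1 / 0.3 = 3.33...
```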
## Autocovariance of AR(1)
The $\text{AR}(1)$ process only depends on the previous value (weighted by the coefficient $\phi$) together with the noise term $W_t$. As the addition of a constant does not impact [[Covariance#Covariance after Linear Transformation|Covariance]], we can omit the intercept $c$.
$ X_t = \phi X_{t-1} + W_t $
**Autocovariance with 0 Lag:**
$\gamma(0)=\mathrm{Cov}(X_t, X_t)=\mathrm{Var}(X_t) $
**Autocovariance with 1 Lag:**
$
\begin{align}
\gamma(1) &= \mathrm{Cov}(X_t, X_{t-1}) \tag{1}\\[6pt]
&= \mathrm{Cov}(\phi X_{t-1} + W_t, X_{t-1}) \tag{2}\\[6pt]
&= \phi\,\mathrm{Cov}(X_{t-1}, X_{t-1}) + \mathrm{Cov}(W_t,X_{t-1}) \tag{3}\\[6pt]
&=\phi\,\gamma(0) \tag{4}
\end{align}
$
where:
- (3) Linearity of covariance in its first argument allows us to separate the covariance terms and pull out the constant $\phi$.
- (4) The covariance of $X_{t-1}$ with itself is the autocovariance with zero lag $\gamma(0)$, i.e. the variance of $X_t$ under stationarity. The covariance of $X_{t-1}$ with the later noise term $W_t$ is zero, as the two random variables are independent.
**Autocovariance with 2 Lags:**
$
\begin{align}
\gamma(2) &= \mathrm{Cov}(X_t, X_{t-2}) \tag{1}\\[6pt]
&= \mathrm{Cov}(\phi X_{t-1} +W_t, X_{t-2}) \tag{2}\\[6pt]
&= \mathrm{Cov}(\phi (\phi X_{t-2}+W_{t-1}) + W_t, X_{t-2})\tag{3}\\[6pt]
&= \mathrm{Cov}(\phi^2 X_{t-2}+\phi W_{t-1} + W_t, X_{t-2})\tag{4}\\[6pt]
&= \phi^2\mathrm{Cov}(X_{t-2}, X_{t-2})+\phi\mathrm{Cov}(W_{t-1}, X_{t-2})+\mathrm{Cov}(W_t, X_{t-2})\tag{5}\\[6pt]
&=\phi^2\gamma(0) \tag{6}
\end{align}
$
where:
- (3) Express $X_{t-1}$ in terms of $X_{t-2}$ by applying the model equation once more.
- (6) The noise terms $W_{t-1}$ and $W_t$ are independent of the earlier value $X_{t-2}$, so their covariances are zero.
**Autocovariance Generalized:**
We recognize a pattern for the autocovariance at any lag $h$: it depends only on the lag, not on $t$. Moreover, when $\lvert \phi \rvert <1$, $\phi^h$ converges to $0$ as $h\to \infty$, so the autocovariance decays geometrically, consistent with an $\text{AR}(1)$ process with $\lvert \phi \rvert <1$ being stationary.
$ \gamma(h)=\phi^h\,\gamma(0) $
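This geometric decay can be observed empirically. A minimal sketch, assuming NumPy (`sample_autocov` is a hypothetical helper), compares sample autocovariances of a simulated AR(1) path against $\phi^h\,\gamma(0)$:

```python
import numpy as np

rng = np.random.default_rng(1)
phi, n = 0.7, 500_000

x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

def sample_autocov(x, h):
    """Sample autocovariance of x at lag h."""
    xc = x - x.mean()
    return np.mean(xc[h:] * xc[:len(x) - h])

gamma0 = sample_autocov(x, 0)
for h in range(4):
    print(h, sample_autocov(x, h), phi**h * gamma0)  # the two columns should match
```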
## Variance of AR(1)
By writing out the definition of $X_t$, we get the [[Variance of Sum of Random Variables]]. Since $X_{t-1}$ and $W_t$ are independent, it can be broken into individual variances; the variance of the constant $c$ is zero and drops out.
$
\begin{align}
\mathrm{Var}(X_t) &= \mathrm{Var}(\phi X_{t-1}+W_t+c)\\[2pt]
&= \mathrm{Var}(\phi X_{t-1})+\mathrm{Var}(W_t)+\mathrm{Var}(c) \\[2pt]
&= \phi^2 \mathrm{Var}(X_{t-1})+\mathrm{Var}(W_t)
\end{align}
$
Assuming stationarity, the variance of $X_t$ and $X_{t-1}$ is the same.
$
\begin{align}
\mathrm{Var}(X_t)-\phi^2 \mathrm{Var}(X_{t}) &= \mathrm{Var}(W_t)\\[6pt]
\mathrm{Var}(X_t)&= \frac{\mathrm{Var}(W_t)}{1- \phi^2}
\end{align}
$
Consequently, writing $\sigma^2 = \mathrm{Var}(W_t)$, the full autocovariance function is:
$
\begin{align}
\gamma(h) &= \phi^h\,\gamma(0) \\[6pt]
\gamma(h) &= \phi^h\,\frac{\sigma^2}{1-\phi^2}
\end{align}
$
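As a final check, the stationary variance formula can also be verified numerically. A minimal sketch, where the noise scale and coefficient are arbitrary assumptions: the sample variance of a long path should approach $\sigma^2/(1-\phi^2)$.

```python
import numpy as np

rng = np.random.default_rng(2)
phi, sigma, n = 0.7, 2.0, 500_000

x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)  # W_t ~ N(0, sigma^2)

print(x.var())                  # sample gamma(0)
print(sigma**2 / (1 - phi**2))  # theoretical: 4 / 0.51 ≈ 7.84
```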