The random walk is a special case of the [[Autoregressive Model]] with a single lagged term and coefficient $\phi = 1$. Optionally, a deterministic drift term $\delta$ can be added:
$
\begin{align}
X_t &= X_{t-1}+W_t \\[2pt]
X_t &= X_{t-1}+W_t+ \delta
\end{align}
$
We can express the random walk as a sum of [[White Noise Model|White Noise]] terms that are added to the root at $X_0$.
$
\begin{align}
X_t &= X_{t-1}+W_t \\[4pt]
&= X_{t-2}+W_{t-1}+W_t \\[4pt]
&= X_0+W_1+W_2+ \dots + W_t\\
&=X_0+\sum_{i=1}^t W_i
\end{align}
$
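The equivalence of the recursive form and the cumulative-sum form can be checked numerically. A minimal sketch in NumPy; the parameters ($\sigma_W = 1$, $X_0 = 0$, $t = 100$) are illustrative assumptions:

```python
import numpy as np

# Illustrative parameters (assumptions, not from the text above).
rng = np.random.default_rng(0)
t_max = 100
sigma_w = 1.0

w = rng.normal(0.0, sigma_w, size=t_max)  # white noise terms W_1, ..., W_t

# Recursive form: X_t = X_{t-1} + W_t, starting at X_0 = 0.
x_recursive = np.empty(t_max)
x_prev = 0.0
for t in range(t_max):
    x_prev = x_prev + w[t]
    x_recursive[t] = x_prev

# Closed form: X_t = X_0 + sum_{i=1}^t W_i.
x_cumsum = np.cumsum(w)

print(np.allclose(x_recursive, x_cumsum))  # both forms produce the same path
```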
## Stationarity of Random Walk
The property of [[Stationarity]] can be ruled out immediately for the *random walk with drift*, as the [[Expectation]] is not constant over time due to the drift term. For the case *without drift*, however, we need to check the assumptions of [[Stationarity#Weak Stationarity|Weak Stationarity]], since the answer is not obvious.
**Constant Expectation:**
Viewed from $t=0$, the expectation of every $X_t$ is $0$, since it just sums the zero expectations of all future white noise terms (assuming $\mathbb E[X_0]=0$ at the origin).
$ \mathbb E[X_t] = \mathbb E[X_0]+\mathbb E[W_1]+\dots +\mathbb E[W_t]=0 $
Once the path up to $(t-1)$ has been realized, however, the conditional expectation at $t$ depends on that realization. The realized white noise terms no longer have expectation $0$; their expectation is their respective realization.
$ \mathbb E[X_t \mid X_{t-1}] = X_{t-1} $
Also, there is no mean-reverting force inherent in the random walk process that would, for example, pull a path that has drifted away from $0$ back toward it.
>[!note]
>We conclude that there is no constant expectation across all $t$.
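The two perspectives can be illustrated with a short simulation. Averaged across many independent paths the mean of $X_t$ stays near $0$, yet the expected step from any realized $X_{t-1}$ is zero, so the walk stays wherever it last was. The parameters ($\sigma_W = 1$, $X_0 = 0$) are illustrative assumptions:

```python
import numpy as np

# Assumed parameters for illustration: sigma_W = 1, X_0 = 0.
rng = np.random.default_rng(42)
n_paths, t_max = 10_000, 200
paths = np.cumsum(rng.normal(0.0, 1.0, size=(n_paths, t_max)), axis=1)

# Unconditional mean: E[X_t] ≈ 0 at every t (checked here at t = t_max).
print(paths[:, -1].mean())   # close to 0

# Conditional view: the expected increment from X_{t-1} to X_t is 0,
# i.e. E[X_t | X_{t-1}] = X_{t-1} -- no pull back toward 0.
steps = paths[:, 1:] - paths[:, :-1]
print(steps.mean())          # close to 0
```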
**Constant Variance:**
The [[Variance]] of each $W_t$ is $\sigma_W^2$. Since the white noise terms are [[Independence and Identical Distribution|i.i.d.]], the [[Variance of Sum of Random Variables]] is simply the sum of the individual variances. Thus the variance after $t$ steps is $t\sigma_W^2$.
$
\begin{align}
\mathrm{var}(X_t) &=\mathrm{var}(W_1)+\dots+\mathrm{var}(W_t)\\[6pt]
&=t\sigma_W^2
\end{align}
$
Since the variance grows linearly with $t$, it diverges to infinity as $t\to \infty$.
>[!note]
>We conclude that the variance is also clearly not constant across all $t$, as it increases over time.
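The linear growth $\mathrm{var}(X_t) = t\sigma_W^2$ is easy to verify empirically by estimating the variance across many simulated paths. A sketch assuming $\sigma_W = 1$ and $X_0 = 0$:

```python
import numpy as np

# Assumed parameters for illustration: sigma_W = 1, X_0 = 0.
rng = np.random.default_rng(1)
n_paths, sigma_w = 20_000, 1.0
paths = np.cumsum(rng.normal(0.0, sigma_w, size=(n_paths, 500)), axis=1)

for t in (10, 100, 500):
    # Sample variance across paths at time t (columns 0-indexed, times 1-indexed).
    print(t, paths[:, t - 1].var())   # ≈ t * sigma_w**2
```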
**Autocovariance only Function of Gap:**
We can also show that the autocovariance is not a function of the time gap $|s-t|$ alone, as stationarity would require.
$ \begin{align}
\mathrm{cov}(X_s,X_t)
&= \mathrm{cov}\Big(\sum_{i=1}^s W_i, \sum_{j=1}^t W_j\Big) \tag{1}\\[2pt]
&=\sum_{i=1}^s \sum_{j=1}^t \mathrm{cov}(W_i, W_j), \qquad
\mathrm{cov}(W_i, W_j) = \begin{cases} \sigma_W^2 & \text{if} \quad i=j\\ 0 & \text{if} \quad i \neq j \end{cases} \tag{2}\\[6pt]
&=\min(s,t)\,\sigma_W^2 \tag{3}
\end{align}
$
where:
- (1) $X_s$ and $X_t$ can each be expressed as a sum of white noise terms.
- (2) By [[Covariance#Covariance of Sums of Random Variables|Linearity of Covariance]] the covariance of the sums expands into a double sum of pairwise covariances. Since the white noise terms are independent, the covariance is $0$ for every $i \neq j$; only when $i=j$ do we get the variance $\sigma_W^2$.
- (3) The double sum contains $\min(s,t)$ terms with $i=j$, so the result is that many times the variance.
>[!note]
>The autocovariance is governed by the absolute positions of $s$ and $t$ via $\min(s,t)$, not by their difference $|s-t|$. Therefore the autocovariance also fails the stationarity requirement.
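The result $\mathrm{cov}(X_s, X_t) = \min(s,t)\,\sigma_W^2$ can be checked by Monte Carlo: two pairs of times with the *same* gap $|s-t|$ yield very different autocovariances. A sketch assuming $\sigma_W = 1$ and $X_0 = 0$:

```python
import numpy as np

# Assumed parameters for illustration: sigma_W = 1, X_0 = 0.
rng = np.random.default_rng(7)
n_paths = 50_000
paths = np.cumsum(rng.normal(0.0, 1.0, size=(n_paths, 120)), axis=1)

def emp_cov(s, t):
    # Sample covariance of X_s and X_t across simulated paths (1-based times).
    return np.cov(paths[:, s - 1], paths[:, t - 1])[0, 1]

print(emp_cov(10, 20))   # ≈ min(10, 20) = 10
print(emp_cov(50, 60))   # same gap |s-t| = 10, but ≈ 50, not 10
```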