## Well-Behaved Markov Chains

In a well-behaved [[Markov Chain]], the following convergence properties are typically observed:

- *Convergence to steady-state probabilities:* The $n$-step transition probability $r_{ij}(n)$ converges to a constant $\pi_j$ as $n \to \infty$.
$$ r_{ij}(n) \xrightarrow[n\to \infty]{} \pi_j $$
- *Asymptotic independence from the initial state:* The influence of the starting state $i$ on the $n$-step transition probabilities becomes asymptotically negligible as $n$ grows, so starting in state $i$ yields approximately the same transition probabilities as starting in some other state $j$.
$$ r_{ik}(n) \approx r_{jk}(n) $$

## Non-Converging Markov Chains

Certain Markov chains fail to exhibit convergence due to structural issues. Here are two examples:

**Periodic structure:** Starting at state $2$, the MC transitions to state $1$ or state $3$ in its first step. In the next step it transitions back to state $2$ with certainty, and this pattern repeats.

![[periodic-markov-chain.png|center|250]]

The $n$-step transition probabilities therefore switch between $0$ and $1$ depending on whether $n$ is odd or even. Thus, the transition probabilities do not converge:

$$
\begin{align}
n \text{ is even: } & \; r_{22}(n)=1 \\
n \text{ is odd: } & \; r_{22}(n)=0
\end{align}
$$

**Sticky start:** When some states are not accessible from other states, the starting state remains important even as $n \to \infty$.

![[sticky-start-markov-chain.png|center|400]]

Starting in state $1$, we stay in that state with certainty, even as $n \to \infty$:

$$
\begin{align}
r_{11}(n)&=1 \\
r_{31}(n)&=0
\end{align}
$$

## Recurrent & Transient States

**Recurrent states:** Starting from state $i$, no matter where the path leads, there is always a way back to state $i$.

**Recurrent class:** A collection of recurrent states that communicate only with each other; no path connects one recurrent class to another. The importance of the initial state will not vanish as soon as the MC contains more than one recurrent class (e.g. below, we will never end up in state $5$ when we started in state $8$).

**Transient states:** Every state that is not recurrent. Transient states will only be visited a finite number of times, even as $n \to \infty$. Thus the transition probability of ending up in a transient state converges to $0$.

![[recurrent-states-markov-chain.png|center|400]]

## Periodicity

The states in a recurrent class are periodic if they can be grouped into $\ge 2$ groups such that all transitions from one group lead to the next group with probability $1$. On this grouped aggregation level, the system is deterministic and periodic. As soon as any state in the recurrent class has a positive self-transition probability, the class is no longer periodic. Even though a single self-transition may have only a small immediate effect, it breaks the periodicity, so as $n \to \infty$ the chain converges to steady-state probabilities.

![[periodicity-markov-chain.png|center|300]]

## Steady-State Probabilities

Steady-state probabilities appear only in MCs that have exactly one recurrent class, and only if this class is not periodic. If these conditions are met, we can derive the steady-state probabilities by taking the limit as $n \to \infty$ of the key recursion formula:

$$
\begin{align}
r_{ij}(n)&=\sum_{k=1}^m r_{ik}(n-1)\cdot p_{kj} \\[8pt]
\lim_{n\to \infty} r_{ij}(n) &=\sum_{k=1}^m r_{ik}(\infty)\cdot p_{kj} \\[14pt]
\pi_j&=\sum_k \pi_k\cdot p_{kj} \quad \text{(for } j=1, \dots, m)
\end{align}
$$

The resulting equations are called the "balance equations".
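As a quick numerical check of the convergence claim, here is a minimal sketch. It assumes NumPy, and the two-state matrix `P` is a made-up example rather than a chain from this note; any chain with a single aperiodic recurrent class behaves the same way.

```python
import numpy as np

# Hypothetical two-state chain (rows sum to 1); the numbers are
# made up for illustration, not taken from this note.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Iterate the key recursion r(n) = r(n-1) @ P, i.e. compute P^n.
Pn = np.eye(2)
for _ in range(50):
    Pn = Pn @ P

print(Pn)  # both rows ~ [0.8333, 0.1667]: the starting state is forgotten
```

Both rows of $P^n$ end up numerically identical, which is exactly the asymptotic independence from the initial state described above.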
On their own, the balance equations do not determine a unique solution, since any scalar multiple of a solution also solves them. But since all $\pi$-terms are probabilities, we know that they must sum to $1$. With this additional constraint, we can solve the system of linear equations for all $\pi$:

$$ \sum_{j=1}^m \pi_j = 1 $$

## Visit Frequencies

Visit frequencies provide a frequentist interpretation of probabilities in Markov chains. By observing state transitions over a large number of steps, we can approximate steady-state probabilities and transition behaviors from observed averages, as illustrated in the sketch at the end of this section.

**Frequency of being in $j$:**

- Let $Y_j(n)$ be the number of times that the chain visits state $j$ in $n$ steps.
- The relative frequency of visits approximates the steady-state probability $\pi_j$.

$$ \frac{Y_j(n)}{n} \approx \pi_j, \quad \text{as } n\to \infty $$

**Frequency of transitions from $i \to j$:**

- Let $Z_{ij}(n)$ be the number of transitions from state $i$ to state $j$ in $n$ steps.
- The relative frequency of such transitions approximates the product of the steady-state probability of being in state $i$ (i.e. $\pi_i$) and the transition probability $p_{ij}$.

$$ \frac{Z_{ij}(n)}{n} \approx \pi_i \cdot p_{ij}, \quad \text{as } n\to \infty $$

**Frequency of transitions into $j$:**

- Let $Z_{j}(n)$ be the number of transitions into state $j$ from any state in $n$ steps.
- The relative frequency of $Z_j$ approximates the sum over the probabilities of being in any state $k$, each multiplied by the probability of transitioning from $k$ to $j$; by the balance equations, this is again $\pi_j$.

$$ \frac{Z_{j}(n)}{n} \approx \sum_k \pi_k \cdot p_{kj} = \pi_j, \quad \text{as } n\to \infty $$
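To tie the balance equations and the visit frequencies together, here is a minimal sketch, again using the made-up two-state matrix `P` from the previous snippet: it solves the balance equations plus the normalization constraint exactly, then simulates the chain and compares the observed frequencies $Y_j(n)/n$ and $Z_{ij}(n)/n$ against $\pi_j$ and $\pi_i \cdot p_{ij}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Same hypothetical two-state chain as in the previous sketch.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
m = len(P)

# Solve pi = pi @ P together with sum(pi) = 1: drop one redundant
# balance equation and replace it with the normalization row.
A = np.vstack([(P.T - np.eye(m))[:-1], np.ones(m)])
b = np.append(np.zeros(m - 1), 1.0)
pi = np.linalg.solve(A, b)

# Simulate the chain and count visits and transitions.
n_steps = 100_000
state = 0
Y = np.zeros(m)        # Y_j(n): number of visits to state j
Z = np.zeros((m, m))   # Z_ij(n): number of transitions i -> j
for _ in range(n_steps):
    nxt = rng.choice(m, p=P[state])
    Y[nxt] += 1
    Z[state, nxt] += 1
    state = nxt

print(pi)              # exact pi:          ~[0.8333, 0.1667]
print(Y / n_steps)     # visit frequencies:  ~pi_j
print(Z / n_steps)     # transition freqs:   ~pi_i * p_ij
```

Replacing one balance equation with the normalization row is a standard trick: the balance equations are linearly dependent (the matrix $P^\top - I$ is singular, since $1$ is always an eigenvalue of $P$), so the system has no unique solution without the $\sum_j \pi_j = 1$ constraint.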