A Markov chain ("MC") is a [[Stochastic Process]] that satisfies the Markov property. While the [[Bernoulli Process]] and [[Poisson Process]] are memoryless, in an MC the prediction of the future is conditional on the current state of the chain.

![[markov-chain-diagram.png|center|400]]

We will consider a discrete-time, finite-state MC: a stochastic process with a finite number of possible states over $t$ discrete time steps.

**Definitions:**

- *States:* We denote $X_n$ as the state after $n$ time steps, i.e. the state of the system at time $n$. Since the transition between states follows a stochastic process, the state $X_n$ is itself a [[Random Variable]]. The initial state $X_0$ is either given or may be random as well.
- *Transition probabilities:* At each successive time step the current state $i$ may change to some state $j$ with probability $\mathbf P(X_{n+1}=j \vert X_n=i)$, called a "transition probability". The transition probabilities out of the current state $i$ must sum to $1$: $\sum_j p_{ij}=1$.
- *Markov property:* The transition probability $p_{ij}$ from state $i \to j$ depends only on the current state; it is independent of time and of everything not captured by the state (e.g. no path dependence). Defining the right granularity (what is captured by the state) is therefore crucial.

$$
\begin{align}
p_{ij} &=\mathbf P(X_{n+1}=j \vert X_n=i) \\
&=\mathbf P(X_{n+1}=j \vert X_n=i, X_{n-1}, \dots,X_0)
\end{align}
$$

**Model specification:** To construct a Markov chain, follow these steps:

- *Identify states:* Define the finite set of possible states.
- *Identify possible transitions:* Determine which transitions between states are possible.
- *Assign transition probabilities:* Specify the probability $p_{ij}$ of transitioning between states.

## N-Step Probabilities

The n-step transition probability $r_{ij}(n)$ is the probability of transitioning from state $i$ to state $j$ in $n$ steps.
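As a minimal sketch, the n-step probabilities $r_{ij}(n)$ are the entries of the $n$-th power of the transition matrix, built up by repeated one-step multiplication. The 3-state chain and its matrix entries below are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical 3-state chain; row i holds the transition
# probabilities out of state i, so each row sums to 1.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])

def n_step_matrix(P, n):
    """Return the matrix R with R[i, j] = r_ij(n)."""
    R = np.eye(P.shape[0])   # r_ij(0) = 1 if i == j, else 0
    for _ in range(n):
        R = R @ P            # one-step update of the recursion
    return R

R5 = n_step_matrix(P, 5)
assert np.allclose(R5.sum(axis=1), 1.0)  # each row is still a PMF
```

With $n=1$ this reduces to the one-step matrix $P$ itself, and each row of $P^n$ remains a valid PMF over the target states $j$.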
This probability is independent of the starting time (whether the chain starts at time $0$ or at time $s$):

$$
\begin{align}
r_{ij}(n)&=\mathbf P(X_n=j \vert X_0=i)\\[2pt]
r_{ij}(n)&=\mathbf P(X_{n+s}=j \vert X_s=i)
\end{align}
$$

The sum of all n-step transition probabilities that start at a fixed state $i$ and end at all possible target states $j$ must equal $1$, for any fixed state $i$:

$$
\sum_{j=1}^m r_{ij}(n)=1 \quad \forall i
$$

**Recursive Calculation:** To compute $r_{ij}(n)$, we can break the transition over $n$ steps into parts using an intermediate state $k$:

- Transition from $i \to k$ in $(n-1)$ steps and then from $k \to j$ in $1$ step. ![[markov-chain-n-steps-1.png|center|300]]
- Transition from $i \to k$ in $1$ step and then from $k \to j$ in $(n-1)$ steps. ![[markov-chain-n-steps-2.png|center|300]]

Due to the Markov property, the step from $k \to j$ is independent of how the chain reached $k$. This lets us [[Independence of Events|multiply the probabilities]] to obtain a joint probability, summing over all intermediate states $k$:

$$
\begin{align}
r_{ij}(n)&=\sum_{k=1}^m r_{ik}(n-1) \cdot p_{kj} \\
r_{ij}(n)&=\sum_{k=1}^m p_{ik} \cdot r_{kj}(n-1)
\end{align}
$$

**Random Initialization:** We want the probability that the MC ends up in state $j$ after $n$ steps when the initial state is random. We therefore sum over the PMF of the initial states, each weighted by the respective n-step transition probability:

$$
\mathbf P(X_n=j)=\sum_{i=1}^m \underbrace{\mathbf P(X_0=i)}_{\text{initial state}} \cdot \; r_{ij}(n)
$$
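The random-initialization formula can be sketched as a row-vector update: start from the PMF of $X_0$ and multiply by the transition matrix once per step. The chain and the initial PMF below are assumed values for illustration, not from the text:

```python
import numpy as np

# Same hypothetical 3-state chain as above.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])

# Assumed PMF of the random initial state X_0.
pi0 = np.array([0.2, 0.5, 0.3])

def state_distribution(pi0, P, n):
    """Return the vector with entry j = P(X_n = j) = sum_i P(X_0 = i) * r_ij(n)."""
    pi = pi0.copy()
    for _ in range(n):
        pi = pi @ P   # push the distribution one step forward
    return pi

pi10 = state_distribution(pi0, P, 10)
assert np.isclose(pi10.sum(), 1.0)  # the result is still a valid PMF
```

Updating the distribution vector step by step costs $O(n m^2)$ instead of the $O(n m^3)$ of forming the full matrix $P^n$ first, which is the usual choice when only $\mathbf P(X_n=j)$ is needed.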