PageRank builds on [[Katz Centrality]] and additionally scales each neighbor's centrality by its out-degree.
When node $j$ has a highly central in-neighbor $i$, it makes a difference whether $i$ points to $j$ exclusively or to a thousand other nodes as well. In the latter case, $j$ has to share the attention of $i$, which makes each individual edge less important.
$ x_j^{(k+1)}= \alpha \sum_{i=1}^n A_{ij} \frac{x_i^{(k)}}{k_i^{\text{out}}} + \beta_j $
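A minimal NumPy sketch of one such update on a toy graph (the graph, $\alpha$, and $\beta$ are example choices, and the convention $A_{ij}=1$ for an edge $i \to j$ is an assumption chosen to match the sum above):

```python
import numpy as np

# Assumed convention: A[i, j] = 1 for an edge i -> j, so k_i^out is the row sum.
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
n = A.shape[0]
k_out = A.sum(axis=1)                 # out-degrees (all > 0 in this example)
alpha, beta = 0.85, np.full(n, 0.15)  # example damping and additive term

x = np.ones(n)                        # x^(0)
# One update: x_j^(k+1) = alpha * sum_i A_ij * x_i^(k) / k_i^out + beta_j
x = np.array([alpha * sum(A[i, j] * x[i] / k_out[i] for i in range(n)) + beta[j]
              for j in range(n)])
```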
We can also write this more compactly in matrix form, where $D$ is a [[Matrix Diagonalization#Diagonal Matrix|diagonal matrix]] with the out-degrees on the diagonal and $0$ everywhere else.
$
\begin{aligned}
D&=\text{diag}(k_1^{\text{out}},\dots, k_n^{\text{out}}) \\[4pt]
D^{-1}&=\text{diag}(\frac{1}{k_1^{\text{out}}},\dots, \frac{1}{k_n^{\text{out}}})
\end{aligned}
$
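Both matrices are straightforward to build in NumPy; a short sketch with the out-degrees of the toy graph above (the guard for dangling nodes, i.e. zero out-degree, is an added assumption so that $D^{-1}$ exists):

```python
import numpy as np

k_out = np.array([2.0, 1.0, 1.0])   # out-degrees of the toy graph above
k_out[k_out == 0] = 1.0             # assumption: treat dangling nodes as degree 1

D = np.diag(k_out)                  # diag(k_1^out, ..., k_n^out)
D_inv = np.diag(1.0 / k_out)        # diag(1/k_1^out, ..., 1/k_n^out)
assert np.allclose(D @ D_inv, np.eye(3))
```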
Left-multiplying a matrix $A$ by a diagonal matrix scales the rows of $A$ by the corresponding diagonal elements. Since the sum in the update runs over the first index of $A$, the iteration applies the transpose of $D^{-1}A$ to $x^{(k)}$.
$
\begin{aligned}
(D^{-1} A)_{ij} &= \frac{1}{k_i^{\text{out}}}A_{ij} \\[10pt]
x^{(k+1)} &= \alpha \left(D^{-1}A\right)^{\top} x^{(k)}+\beta
\end{aligned}
$
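The same toy example in matrix form; the first `assert` checks the row-scaling identity above, and the loop runs the transposed iteration (a sketch with example parameters, not a definitive implementation):

```python
import numpy as np

A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)     # A[i, j] = 1 for an edge i -> j (assumed)
k_out = A.sum(axis=1)
alpha, beta = 0.85, np.full(3, 0.15)       # example parameters

P = np.diag(1.0 / k_out) @ A               # D^{-1} A, row-stochastic
assert np.allclose(P, A / k_out[:, None])  # (D^{-1} A)_ij = A_ij / k_i^out

x = np.ones(3)                             # x^(0)
for _ in range(200):
    x = alpha * P.T @ x + beta             # x^(k+1) = alpha (D^{-1} A)^T x^(k) + beta
```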
As $k$ goes to infinity, $x^{(k)}$ converges to a fixed point $v$ (for $\alpha < 1$, since the row-stochastic matrix $D^{-1}A$ has spectral radius $1$). This $v$ generalizes an [[Eigenvectors|eigenvector]]: for $\beta = 0$ the fixed-point equation reduces to an eigenvector equation.
$ x^{(k)} \xrightarrow{k \to \infty} v, \quad v=\left(\mathbf I - \alpha \left(D^{-1}A\right)^{\top}\right)^{-1}\beta $
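The fixed point can also be computed directly by solving the linear system rather than iterating; continuing the sketch above, the two agree:

```python
# Solve (I - alpha (D^{-1} A)^T) v = beta instead of forming the inverse explicitly.
v = np.linalg.solve(np.eye(3) - alpha * P.T, beta)
assert np.allclose(v, x)   # matches the iterate x^(k) for large k
```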