We want a centrality score for a node that is the weighted sum of the centrality scores of all its neighbors. Since the neighbors' scores in turn depend on their own neighbors, this is an iterative problem.

**Initial Iteration Step:** We multiply the [[Network Types#^daa3e3|Adjacency Matrix]] $A$ with an initialized centrality vector $x^{(0)}$, which is usually filled with all $1$'s, to obtain the updated centrality vector $x^{(1)}$.

![[eigenvector-centrality.jpeg|center|450]]

$
\begin{align}
x^{(1)}_i &= \sum_{j=1}^n A_{ij} x_j^{(0)} \\[10pt]
x^{(1)}&=A x^{(0)} \\[6pt]
x^{(2)}&=A x^{(1)} =A^2x^{(0)}\\[6pt]
x^{(k)}&=A^kx^{(0)}
\end{align}
$

where:
- (1) We take the $i$-th row of $A$ and multiply this vector with the initialized centrality vector $x^{(0)}$. This inner product returns the updated centrality of node $i$.
- (2) We perform the update for all nodes at once in a single iteration.
- (4) The centrality of node $i$ after $k$ iterations, $x_i^{(k)}$, counts the walks of length $k$ that end in node $i$, weighted by the initial centralities.

As we want a convergence statement, taking infinite [[Powers of the Adjacency Matrix]] $A^k$ is not feasible. Therefore we use the [[Eigenvectors|eigenvalue decomposition]] of $A$ to derive such a statement.

**Eigenvectors:** An [[Eigenvectors|Eigenvector]] $\mathbf v$ of a matrix $A$ is a vector whose product $A \mathbf v$ results in the same vector $\mathbf v$, only scaled by some factor $\lambda$ (the eigenvalue). For a symmetric $A$ with orthonormal eigenvectors $\mathbf v_i$, the matrix can be written as a sum of rank-one eigenvector terms:

$
\begin{aligned}
A\mathbf v&=\lambda \mathbf v \\
A&=\sum_{i=1}^n \lambda_i \mathbf v_i \mathbf v_i^T
\end{aligned}
$

>[!Note:]
>It can be shown that taking the power of $A$ does not change the eigenvectors of the matrix, only the eigenvalues. Therefore the following holds: $A^k = \sum_{i=1}^n \lambda_i^k \mathbf v_i \mathbf v_i^T$

$
\begin{align}
x^{(k)}&=A^kx^{(0)} \\[6pt]
&=\sum_{i=1}^n \lambda^k_i v_i v_i^T x^{(0)} \\[12pt]
&=\lambda_1^kv_1v_1^Tx^{(0)}+\dots+\lambda_n^kv_nv_n^Tx^{(0)} & (\lambda_1 \ge \lambda_2 \ge \dots) \\[8pt]
&=\lambda_1^k \Big(v_1v_1^Tx^{(0)}+\frac{\lambda_2^k}{\lambda_1^k}v_2v_2^Tx^{(0)}+\dots\Big)\\[12pt]
&\approx\lambda_1^kv_1v_1^Tx^{(0)} & (\text{as } k \to \infty)
\end{align}
$

where:
- (6) We decompose $A^k$ into the sum of all rank-one eigenvector terms, each scaled by its eigenvalue raised to the power $k$.
- (7) We sort the eigenpairs such that the largest eigenvalue and its corresponding eigenvector come first: $\lambda_1, v_1$.
- (8) When $\lambda_1 > \lambda_2$ and we let $k\to \infty$, the ratio $\frac{\lambda_2^k}{\lambda_1^k}$ converges to zero. Therefore it can be neglected, and we are left with only the first eigenvector and eigenvalue.

**Perron-Frobenius Theorem:** Given that all entries of the matrix $A^k$ are $>0$, we can show that $x^{(k)}$ converges to a vector solely governed by the first [[Principal Component Analysis|principal component]]:

$
x^{(k)} \xrightarrow[]{k \to \infty}\lambda_{\max}^k v\underbrace{v^Tx^{(0)}}_{\alpha}
$
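A minimal power-iteration sketch of this convergence, assuming a small undirected example graph (the adjacency matrix and the per-step normalization are illustrative choices, not from the note):

```python
import numpy as np

# Hypothetical 4-node undirected example graph: edges 0-1, 0-2, 1-2, 2-3
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

x = np.ones(A.shape[0])          # x^(0): all-ones initialization
for k in range(100):
    x = A @ x                    # x^(k+1) = A x^(k)
    x = x / np.linalg.norm(x)    # rescale so the iterate does not blow up with lambda_max^k

# Compare against the leading eigenvector from a full eigendecomposition (A is symmetric here)
eigvals, eigvecs = np.linalg.eigh(A)
v1 = eigvecs[:, np.argmax(eigvals)]
v1 = v1 * np.sign(v1.sum())      # fix the sign for comparison

print("power iteration:    ", np.round(x, 4))
print("leading eigenvector:", np.round(v1, 4))
```

Normalizing each iterate removes the $\lambda_{\max}^k$ growth factor, so after enough steps the iterate agrees with the leading eigenvector up to scale, as the Perron-Frobenius statement above predicts.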
**Directed Networks:** When edges are directed, we have to define whether we look at the centrality of incoming or outgoing edges. While the Perron-Frobenius theorem applies in either case, the construction of the principal components differs, as the adjacency matrix will not be symmetric (see the sketch below).

- *Outgoing edges:* i.e. right eigenvectors are calculated row-wise, as row $i$ of the adjacency matrix $A$ lists the edges going out of node $i$. $ A\mathbf v= \lambda \mathbf v $
- *Incoming edges:* i.e. left eigenvectors are calculated column-wise, as column $j$ of the adjacency matrix $A$ lists the edges pointing into node $j$. $ \mathbf v^TA=\lambda \mathbf v^T $
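A minimal sketch of both variants, assuming a hypothetical directed example graph with the convention $A_{ij}=1$ for an edge $i \to j$; the helper `leading_eigvec` is an illustrative power iteration, not an established API:

```python
import numpy as np

# Hypothetical directed 4-node example: edges 0->1, 0->2, 1->2, 2->0, 2->3, 3->2
A = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

def leading_eigvec(M, iters=200):
    """Power iteration for the leading (right) eigenvector of M."""
    x = np.ones(M.shape[0])
    for _ in range(iters):
        x = M @ x
        x = x / np.linalg.norm(x)
    return x

out_centrality = leading_eigvec(A)    # right eigenvector: A v = lambda v
in_centrality = leading_eigvec(A.T)   # left eigenvector:  v^T A = lambda v^T  <=>  A^T v = lambda v

print("out-centrality:", np.round(out_centrality, 4))
print("in-centrality: ", np.round(in_centrality, 4))
```

The left eigenvector of $A$ is simply the right eigenvector of $A^T$, which is why the incoming-edge centrality reuses the same routine on the transposed matrix.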