Linear normal models are among the most frequently used models in statistics due to their simplicity and desirable mathematical properties. [[Univariate Linear Regression|Linear Regression]] is the most prominent example of this model class. Key characteristics of linear normal models are listed below.

**Independence and normality:** The parameters $\Theta_j$ and the noise terms $W_i$ are independent [[Random Variable|r.v.'s]], each following a [[Gaussian Distribution]]. (The observations $X_i$ are then Gaussian as well, but not independent of the parameters, since they are built from them.)

**Linearity:** The model combines these variables in a linear form:

$$ X_i = \sum_{j=1}^p \Theta_j \phi_{ij} + W_i $$

where:

- Prior: $\Theta_j \sim \mathcal N(\mu, \sigma^2)$
- Noise: $W_i \sim \mathcal N(0, \sigma_W^2)$
- Covariates or basis functions: $\phi_{ij}$, the known coefficient of $\Theta_j$ in observation $X_i$

**Normality of the posterior:** Because the prior and the likelihood are Gaussian, the posterior distribution of $\Theta$ is Gaussian as well. This holds both for the marginal posterior of each $\Theta_j$ and for the joint posterior of $\{\Theta_1, \dots, \Theta_p\}$.

**Estimation:** The [[MAP Estimator|MAP estimate]] is found by maximizing the posterior, which for a Gaussian posterior amounts to minimizing the quadratic function in its exponent. Differentiating that function with respect to each $\theta_j$ and setting the derivatives to zero yields a system of $p$ linear equations in the $p$ unknowns $\hat\theta_1, \dots, \hat\theta_p$:

$$ \frac{\hat\theta_j - \mu}{\sigma^2} - \frac{1}{\sigma_W^2} \sum_{i=1}^n \phi_{ij} \left( x_i - \sum_{k=1}^p \hat\theta_k \, \phi_{ik} \right) = 0, \qquad j = 1, \dots, p $$

**Equivalence of MAP and LMS estimates:** The [[MAP Estimator]] and the [[LMS Estimator]] lead to the same estimates. This is due to the shape of the posterior: a Gaussian is symmetric and unimodal, so its mode (the MAP estimate) coincides with its mean (the LMS estimate).

$$ \hat \Theta_{j,\text{MAP}} = \hat \Theta_{j,\text{LMS}} = \mathbb E[\Theta_j \mid X] $$
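A minimal numerical sketch of the above, assuming the i.i.d. prior $\Theta_j \sim \mathcal N(\mu, \sigma^2)$ and noise $W_i \sim \mathcal N(0, \sigma_W^2)$ stated earlier. All concrete names and values (`Phi`, `n`, `p`, the variances) are illustrative choices, not part of the original note. The sketch solves the linear system from the **Estimation** step in matrix form and checks that the result coincides with the posterior mean, i.e. the LMS estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes and hyperparameters (assumptions, not from the note)
n, p = 50, 3                  # number of observations / parameters
mu, sigma, sigma_W = 0.0, 2.0, 0.5

Phi = rng.normal(size=(n, p))            # design matrix of covariates phi_ij
theta_true = rng.normal(mu, sigma, p)    # a draw from the prior
x = Phi @ theta_true + rng.normal(0.0, sigma_W, n)  # observations X_i

# MAP estimate: setting the derivatives of the quadratic exponent to zero
# gives, in matrix form,
#   (Phi^T Phi / sigma_W^2 + I / sigma^2) theta = Phi^T x / sigma_W^2 + mu 1 / sigma^2
A = Phi.T @ Phi / sigma_W**2 + np.eye(p) / sigma**2
b = Phi.T @ x / sigma_W**2 + mu * np.ones(p) / sigma**2
theta_map = np.linalg.solve(A, b)

# The posterior is Gaussian with precision A and mean A^{-1} b, so the
# posterior mean (the LMS estimate) equals the MAP estimate.
theta_lms = np.linalg.inv(A) @ b
assert np.allclose(theta_map, theta_lms)
print("MAP = LMS estimate:", theta_map)
```

The matrix system is just the $p$ scalar equations above stacked together; solving it directly with `np.linalg.solve` is the numerically preferred route, while the explicit inverse is shown only to make the "posterior mean = mode" check transparent.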