The Gaussian Distribution, also known as the *Normal Distribution*, describes an unbounded [[Random Variable]] $X$, meaning it can take any value in $(-\infty, \infty)$. This is because the exponential term in the [[Probability Density Function|PDF]] is strictly positive for every $x$, so the density never reaches zero.
However, the Gaussian has *rapidly decaying tails*, meaning that for values of $x$ far from the mean $\mu$, the probability density becomes negligible.
**Probability Density (PDF):**
$ f_{\mu, \sigma^2}(x)=\frac{1}{\sigma \sqrt{2 \pi}}\exp\Big(-\frac{(x- \mu)^2}{2 \sigma^2} \Big) $
**Cumulative Density (CDF):**
$ F_{\mu, \sigma^2}(x)=\frac{1}{\sigma \sqrt{2 \pi}} \int_{- \infty}^x \exp\Big(-\frac{(t- \mu)^2}{2 \sigma^2} \Big)\, dt $
Note that the integral for the [[Cumulative Density Function|CDF]] has no closed-form solution. We rely on approximations from tables or numerical software.
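A minimal sketch of such a numerical evaluation, assuming SciPy is available (its `norm.cdf` approximates the integral via the error function):

```python
from scipy.stats import norm

# Standard normal (mu=0, sigma=1): the CDF has no closed form,
# so scipy evaluates it numerically via the error function.
mu, sigma = 0.0, 1.0
print(norm.cdf(0.0, loc=mu, scale=sigma))   # 0.5, by symmetry around the mean
print(norm.cdf(1.96, loc=mu, scale=sigma))  # ~0.975
```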
## Properties of the Gaussian Distribution
**Affine transformation:**
An affine transformation adds a constant and/or multiplies by a factor. The Gaussian family is closed under such transformations: an affine transformation of a Gaussian random variable is again Gaussian. ^e84e63
$
\begin{align}
\text{if : }&X &&\sim \mathcal N(\mu, \sigma^2)\\
\text{then : }& aX+ b &&\sim \mathcal N(a\mu+b, a^2 \sigma^2)
\end{align} $
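The closure property can be checked empirically by simulation; a short sketch with NumPy (the constants $\mu=2$, $\sigma=3$, $a=0.5$, $b=-1$ are arbitrary illustration choices):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 2.0, 3.0   # parameters of X ~ N(mu, sigma^2)
a, b = 0.5, -1.0       # affine transform coefficients

x = rng.normal(mu, sigma, size=100_000)
y = a * x + b  # affine transform of the Gaussian sample

# Empirical moments should match N(a*mu + b, a^2 * sigma^2)
print(y.mean())  # ~ a*mu + b = 0.0
print(y.std())   # ~ |a|*sigma = 1.5
```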
**Standardization:**
By subtracting the mean and dividing by the standard deviation, we can transform any normal distribution into the standard normal $\mathcal N(0,1)$.
$
\begin{align}
\text{if : }&X &&\sim \mathcal N(\mu, \sigma^2)\\
\text{then : }&Z = \frac{X-\mu}{\sigma} &&\sim \mathcal N(0,1)
\end{align}
$
**Symmetry:**
A Gaussian distribution is symmetric around its mean. When it is centered around zero $(\mu=0)$, multiplying $X$ by $-1$ results in the same distribution. ^4ff648
$
\begin{align}
\text{if : } &X &&\sim \mathcal N(0, \sigma^2)\\
\text{then : } &-X&&\sim\mathcal N(0, \sigma^2)
\end{align}
$
Additionally, the symmetry allows the following relationship for probabilities of absolute values:
$
\begin{aligned}
\mathbf P(\lvert X \rvert>x) &=\mathbf P(X>x) + \mathbf P(X< -x) \\
&=\mathbf P(X>x) + \mathbf P(-X > x) \\ &=2 \mathbf P(X>x)
\end{aligned}
$
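The two sides of this identity can be compared numerically, again assuming SciPy (`norm.sf` is the survival function $\mathbf P(X > x) = 1 - F(x)$; the threshold $x=1.5$ is an arbitrary example):

```python
from scipy.stats import norm

x = 1.5
# For X ~ N(0, 1): P(|X| > x) = P(X > x) + P(X < -x) = 2 * P(X > x)
p_two_sided = norm.sf(x) + norm.cdf(-x)  # direct sum of both tails
p_identity  = 2 * norm.sf(x)             # symmetry shortcut

print(p_two_sided, p_identity)  # both tails are equal, so the values agree
```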
## Quantiles
The quantile function relates probabilities to specific values of $X$. For a given quantile $q_\alpha$, the probability that $X$ is *greater than or equal to* $q_\alpha$ is $\alpha$. Equivalently, and often more intuitively, $X < q_\alpha$ holds with probability $1-\alpha$.
$ \begin{align}
\mathbf P(X \ge q_\alpha)&= \alpha \\
\mathbf P(X\le q_\alpha)&=1- \alpha
\end{align} $
For example, the 90th percentile corresponds to $\alpha=0.1$.
$ \begin{align}
\mathbf P(X\ge q_{0.1})&=0.1 \\
\mathbf P(X\le q_{0.1})&=0.9
\end{align} $
To compute quantiles, we use the inverse CDF (also called the quantile function):
$q_\alpha=F^{-1}(1-\alpha)$
This involves finding the $x$-value corresponding to a given cumulative probability $1 - \alpha$. In practice, tables or numerical methods are used to evaluate $F^{-1}$.
$ \begin{aligned} F(q_\alpha)&= 1 - \alpha \\ F^{-1}(1- \alpha) &= q_\alpha \end{aligned} $
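In code, the inverse CDF is available as SciPy's `norm.ppf` (percent point function), continuing the $\alpha=0.1$ example for the standard normal:

```python
from scipy.stats import norm

alpha = 0.1
# q_alpha = F^{-1}(1 - alpha): the value exceeded with probability alpha
q = norm.ppf(1 - alpha)  # standard normal 90th percentile, ~1.28

print(q)
print(norm.sf(q))   # P(X >= q_alpha) = alpha
print(norm.cdf(q))  # P(X <= q_alpha) = 1 - alpha
```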