When $n$ is too small to invoke the [[Central Limit Theorem]], we can still make statements about the uncertainty of a [[Random Variable]]. The Hoeffding inequality gives an upper bound on the probability that a sample mean $\bar X_n$ deviates from the [[Expectation]] by more than $\epsilon$.
**Statement:**
For a sequence of $n$ [[Independence and Identical Distribution|i.i.d.]] r.v's $X_1, \dots , X_n$, each bounded within the interval $[a,b]$:
$$
\mathbf{P}\left(\left|\bar{X}_n - \mathbb{E}[X]\right| \ge \epsilon \right)
\le 2 \exp \left(-\frac{2n\epsilon^2}{(b-a)^2}\right) \quad \forall \epsilon > 0
$$
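The bound can be sanity-checked against an empirical deviation probability via simulation. The sketch below is illustrative (not from the note itself); it assumes fair-coin Bernoulli samples, so $\mathbb E[X] = 0.5$ and $[a,b] = [0,1]$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps, trials = 50, 0.1, 10_000
a, b = 0.0, 1.0  # Bernoulli samples are bounded in [0, 1]

# Empirical probability that the sample mean deviates from E[X] = 0.5 by >= eps
samples = rng.integers(0, 2, size=(trials, n))
empirical = np.mean(np.abs(samples.mean(axis=1) - 0.5) >= eps)

# Hoeffding bound: 2 * exp(-2 * n * eps^2 / (b - a)^2)
bound = 2 * np.exp(-2 * n * eps**2 / (b - a) ** 2)

print(empirical, bound)
```

The empirical probability stays below the bound; since Hoeffding makes no distributional assumptions beyond boundedness, the bound is typically much larger than the true deviation probability.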
**Assumptions:**
The Hoeffding Inequality relies on the following conditions.
| Description | Formula |
| ------------------------------------------- | ---------------------------------------------- |
| r.v.'s need to be i.i.d.                     | $X_1, X_2, \dots , X_n \stackrel{iid}{\sim }X$ |
| r.v.'s need to be bounded between two values | $\mathbf P(X \not \in [a,b])=0$                |
| Positive number of observations | $n>0$ |
Note that the boundedness assumption makes the inequality inapplicable to unbounded distributions, such as Gaussians.
**Exponential Decay:**
The bound decays exponentially as $n$ grows. This is desirable: the probability of extreme deviations of the sample mean shrinks rapidly with more observations.
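The decay can be tabulated directly from the formula. A minimal sketch, assuming $[a,b]=[0,1]$, $\epsilon = 0.1$, and illustrative sample sizes:

```python
import math

def hoeffding_bound(n, eps, a=0.0, b=1.0):
    """Upper bound on P(|mean - E[X]| >= eps) for n i.i.d. samples in [a, b]."""
    return 2 * math.exp(-2 * n * eps**2 / (b - a) ** 2)

ns = (10, 100, 1000)
bounds = [hoeffding_bound(n, eps=0.1) for n in ns]
for n, bd in zip(ns, bounds):
    print(f"n={n:5d}  bound={bd:.4g}")
```

For small $n$ the bound can exceed 1 (here, $n=10$ gives $2e^{-0.2} \approx 1.64$), in which case it is vacuous; the exponential decay only bites once $n$ is large relative to $(b-a)^2/\epsilon^2$.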
**Other Inequalities:**
See also [[Markov Inequality]] and [[Chebyshev Inequality]], which give simpler but looser bounds on the probability of extreme events.