A parameter $\theta$ is identifiable if the mapping from the parameter space to the distribution, $\theta \mapsto \mathbb P_\theta$, is injective. This means that the parameter uniquely determines the probability distribution and, conversely, the distribution uniquely determines the parameter.
**Formally:**
- Any two distinct parameters $\theta$ and $\theta^\prime$ produce different probability distributions $\mathbb P_\theta, \mathbb P_{\theta'}$
$\theta \not = \theta^\prime \implies \mathbb P_\theta \not = \mathbb P_{\theta^\prime}$
- If two distributions $\mathbb P_\theta$ and $\mathbb P_{\theta^\prime}$ are the same, their underlying parameters must also be the same:
$\mathbb P_\theta = \mathbb P_{\theta^\prime} \implies \theta = \theta^\prime$
**Implications for Model Specification:**
To ensure that the model parameters are meaningful and estimable, we must construct the [[Statistical Model]] with identifiable parameters. Parameters that are not identifiable can lead to ambiguity in statistical inference, as different parameter values might correspond to the same distribution.
**Example:**
Suppose we observe only the sign of a Gaussian, $X = \mathbb 1\{Y < 0\}$ with $Y \sim \mathcal N(\mu, \sigma^2)$, so that $X \sim \text{Ber}\big(1-\Phi(\frac{\mu}{\sigma})\big)$. The pair $(\mu, \sigma^2)$ is not identifiable, since rescaling $(\mu, \sigma)$ by the same constant leaves $\mu/\sigma$, and hence the distribution, unchanged; the ratio $\mu/\sigma$ alone is identifiable:
$$
\begin{align}
\Big(\{0,1\}, \{\text{Ber}\big(1-\Phi(\tfrac{\mu}{\sigma})\big)\}_{(\mu, \sigma^2) \in \mathbb R \times (0,\infty)}\Big) &\implies \text{non-identifiable} \\
\Big(\{0,1\}, \{\text{Ber}\big(1-\Phi(\tfrac{\mu}{\sigma})\big)\}_{\mu/\sigma \in \mathbb R}\Big) &\implies \text{identifiable}
\end{align}
$$
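The non-identifiability above can be checked numerically: a minimal sketch in Python, where the helper names `Phi` and `bernoulli_param` are illustrative, showing that two distinct $(\mu, \sigma)$ pairs with the same ratio induce the same Bernoulli distribution.

```python
from math import erf, sqrt

def Phi(x: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bernoulli_param(mu: float, sigma: float) -> float:
    """Success probability P(X = 1) = 1 - Phi(mu / sigma)."""
    return 1.0 - Phi(mu / sigma)

# Two distinct parameter pairs with the same ratio mu/sigma = 2
# produce the same Bernoulli distribution, so (mu, sigma^2) is
# not identifiable from the distribution of X alone.
p1 = bernoulli_param(2.0, 1.0)  # (mu, sigma) = (2, 1)
p2 = bernoulli_param(4.0, 2.0)  # (mu, sigma) = (4, 2)
print(abs(p1 - p2) < 1e-12)
```

Reparametrizing by $\mu/\sigma$ collapses each such equivalence class to a single point, which is exactly what makes the second model identifiable.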