In parametric hypothesis testing we assume that the data come from a certain distribution family $\mathcal F$ (e.g. [[Gaussian Distribution|Gaussian]]). The [[Hypothesis Tests|Hypothesis Test]] then typically concerns a parameter of that distribution, such as the mean $\mu$ or the standard deviation $\sigma$.
$ \begin{cases} H_0: \mu=\mu_0 \\ H_1: \mu \neq \mu_0 \end{cases} $
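As a concrete illustration, here is a minimal sketch in Python of such a parametric test: a one-sample t-test of $H_0: \mu = \mu_0$ via `scipy.stats.ttest_1samp`. The simulated sample and the value $\mu_0 = 5$ are made-up for the example.

```python
# Parametric test sketch: H0: mu = mu_0 vs H1: mu != mu_0,
# assuming the data are (roughly) Gaussian.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=5.3, scale=1.0, size=50)  # hypothetical Gaussian sample
mu_0 = 5.0                                   # illustrative null value

t_stat, p_value = stats.ttest_1samp(x, popmean=mu_0)
print(f"t = {t_stat:.3f}, p-value = {p_value:.3f}")
# Reject H0 at level alpha if p_value < alpha.
```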
In goodness-of-fit testing (“GoF”), on the other hand, we want to check whether the data come from a fully specified distribution, e.g. $\text{Pois}(3)$ from the [[Poisson Distribution]], versus any alternative distribution.
$
\begin{cases}
H_0: \mathbf P = \text{Pois}(\lambda_0) \\
H_1: \mathbf P \neq \text{Pois}(\lambda_0)
\end{cases}
$
While $H_0$ looks similar to the parametric hypothesis test, in a GoF test $H_1$ allows *any* distribution other than $\text{Pois}(\lambda_0)$, not just other members of the same family. The GoF test therefore assesses the overall “fit” of the distribution rather than a single parameter.
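One common way to carry out such a GoF test in practice is a chi-squared test: bin the observations, compare the observed counts with the counts expected under $\text{Pois}(\lambda_0)$, and reject $H_0$ when the discrepancy is too large. Below is a minimal sketch assuming `scipy`; the simulated sample, $\lambda_0 = 3$, and the binning choice are illustrative only.

```python
# GoF sketch: chi-squared test of H0: P = Pois(lambda_0) vs any alternative.
# Counts are binned into 0, 1, ..., K-1 plus a ">= K" tail bin so that the
# expected probabilities under H0 sum to 1.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.poisson(lam=3.2, size=200)  # hypothetical count data
lambda_0 = 3.0                      # illustrative null value
K = 8                               # bins 0..7 plus a tail bin for values >= 8

# Observed counts per bin.
observed = np.array([np.sum(x == k) for k in range(K)] + [np.sum(x >= K)])

# Expected counts under H0: P = Pois(lambda_0).
probs = np.append(stats.poisson.pmf(np.arange(K), lambda_0),
                  stats.poisson.sf(K - 1, lambda_0))
expected = len(x) * probs

chi2_stat, p_value = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {chi2_stat:.3f}, p-value = {p_value:.3f}")
# A small p-value is evidence against H0, i.e. the Pois(lambda_0) fit is poor.
```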