The [[Delta Method]] allows us to make asymptotic statements about $Y$, given that we know the asymptotic behavior of $X$ and that $Y$ can be expressed as a function of it, $Y = g(X)$.
In a multivariate setup, where both $\bar X_n$ and $\mu$ are in $\mathbb R^d$ and $\Sigma$ is a $(d \times d)$ [[Covariance Matrix]], the [[Multivariate Central Limit Theorem]] tells us the following:
$ \sqrt n \, (\bar X_n- \mu) \xrightarrow[n \to \infty]{(d)}\mathcal N(0, \Sigma) $
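As a sanity check, here is a minimal NumPy simulation sketch (the choices of $d = 2$, $\mu$, $\Sigma$, and centred exponential data are arbitrary): the rescaled sample means should have a covariance close to $\Sigma$.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n, reps = 2, 1_000, 4_000
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
A = np.linalg.cholesky(Sigma)

# Non-Gaussian data with mean mu and covariance Sigma:
# centred iid Exp(1) coordinates pushed through the Cholesky factor.
X = (rng.exponential(1.0, size=(reps, n, d)) - 1.0) @ A.T + mu

# One rescaled sample mean per repetition, as in the CLT statement.
scaled = np.sqrt(n) * (X.mean(axis=1) - mu)

print(np.cov(scaled, rowvar=False))  # should be close to Sigma
```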
The function $g$, which maps $X \mapsto Y$, can be a scalar-valued or vector-valued function with $k$-dimensional output.
$ g: \mathbb R^d \to \mathbb R^k $
**Univariate Delta Method:**
We can state convergence to a Gaussian whose [[Variance]] is scaled by the squared slope $g^\prime(\theta)^2$. Here $\theta$ and $\sigma^2$ are the mean and variance of a single observation, so that $\sqrt n \, (\bar X_n - \theta)$ converges to $\mathcal N(0, \sigma^2)$ by the ordinary CLT.
$ \sqrt n \big(g(\bar X_n)-g(\theta)\big) \xrightarrow[n \to \infty]{(d)}\mathcal N \Big(0, g^\prime(\theta)^2 \, \sigma^2\Big) $
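A minimal simulation sketch of this statement, where the data distribution (Exp(1), so $\theta = \sigma^2 = 1$) and the map $g(x) = x^2$ are arbitrary choices: the empirical variance of $\sqrt n \, (g(\bar X_n) - g(\theta))$ should be close to $g^\prime(\theta)^2 \sigma^2 = 4$.

```python
import numpy as np

rng = np.random.default_rng(0)

n, reps = 1_000, 5_000
theta, sigma2 = 1.0, 1.0        # mean and variance of Exp(1)
g = lambda x: x ** 2            # smooth map applied to the sample mean
g_prime = lambda x: 2 * x

# One sample mean of size n per repetition.
X_bar = rng.exponential(theta, size=(reps, n)).mean(axis=1)
scaled = np.sqrt(n) * (g(X_bar) - g(theta))

print(scaled.var())                  # empirical variance
print(g_prime(theta) ** 2 * sigma2)  # delta-method prediction: 4.0
```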
**Multivariate Delta Method:**
Instead of the derivative $g^\prime(\theta)$, we need to work with $\nabla g(\theta)$.
$
\sqrt n \big(g(\bar X_n)-g(\theta)\big) \xrightarrow[n \to \infty]{(d)}
\mathcal N \Big(0, \nabla g(\theta)^T \, \Sigma \, \nabla g(\theta)\Big)
$
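The same kind of sanity check works here. A sketch with the arbitrary choice $g(x_1, x_2) = x_1 / x_2$ (the ratio of the two coordinate means) on Gaussian data: the empirical variance should approach $\nabla g(\mu)^T \, \Sigma \, \nabla g(\mu)$.

```python
import numpy as np

rng = np.random.default_rng(0)

n, reps = 1_000, 4_000
mu = np.array([2.0, 4.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])
A = np.linalg.cholesky(Sigma)

def g(v):
    """Ratio of the two coordinates, applied along the last axis."""
    return v[..., 0] / v[..., 1]

# Gradient of g at mu: (1/x2, -x1/x2^2) evaluated at (2, 4).
grad = np.array([1 / mu[1], -mu[0] / mu[1] ** 2])

X = rng.standard_normal((reps, n, 2)) @ A.T + mu
scaled = np.sqrt(n) * (g(X.mean(axis=1)) - g(mu))

print(scaled.var())         # empirical variance
print(grad @ Sigma @ grad)  # predicted: grad^T Sigma grad
```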
When $g$ is a vector-valued function, $\nabla g$ is a [[Jacobian#Jacobian Matrix|Jacobian Matrix]] $\mathbf J$, in which the gradient of each output element $g_i$ forms the $i$-th column; see the worked example below.
$
\nabla g=\mathbf J= \begin{bmatrix} \frac{\partial g_1}{\partial x_1}& \ldots & \frac{\partial g_k}{\partial x_1} \\
\vdots & \ddots & \vdots \\
\frac{\partial g_1}{\partial x_d}& \ldots & \frac{\partial g_k}{\partial x_d} \end{bmatrix}
$
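For a concrete instance of this convention, take $d = k = 2$ and $g(x_1, x_2) = (x_1 x_2, \; x_1 + x_2)$; the gradient of $g_1$ fills the first column and the gradient of $g_2$ the second:
$
\mathbf J = \begin{bmatrix} \frac{\partial g_1}{\partial x_1} & \frac{\partial g_2}{\partial x_1} \\ \frac{\partial g_1}{\partial x_2} & \frac{\partial g_2}{\partial x_2} \end{bmatrix} = \begin{bmatrix} x_2 & 1 \\ x_1 & 1 \end{bmatrix}
$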