muse_inference
The Marginal Unbiased Score Expansion (MUSE) method is a generic tool for hierarchical Bayesian inference. MUSE performs approximate marginalization over arbitrary non-Gaussian and high-dimensional latent spaces, providing Gaussianized constraints on hyperparameters of interest. It is much faster than exact methods like Hamiltonian Monte Carlo (HMC), and, unlike many Variational Inference (VI) and Likelihood-Free Inference (LFI) / Simulation-Based Inference (SBI) methods, requires no user input. It excels in high dimensions, which challenge these other methods. It is approximate, so its results may need to be spot-checked against exact methods, but it is itself exact in the asymptotic limit of a large number of data modes contributing to each hyperparameter, or in the limit of a Gaussian joint likelihood regardless of the number of data modes. For more details, see Millea & Seljak, 2021.
MUSE works on standard hierarchical problems, where the likelihood is of the form:

\[\mathcal{P}(x\,|\,\theta) = \int \mathrm{d}z \, \mathcal{P}(x,z\,|\,\theta)\]
In our notation, \(x\) are the observed variables (the “data”), \(z\) are unobserved “latent” variables, and \(\theta\) are some “hyperparameters” of interest. MUSE is applicable when the goal of the analysis is to estimate the hyperparameters, \(\theta\), but otherwise, the latent variables, \(z\), do not need to be inferred (only marginalized out via the integral above).
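For concreteness, here is a minimal sketch of such a hierarchical problem in JAX. The toy model and the names `sample_x_z` and `log_joint` are illustrative assumptions for this sketch, not part of the package API: the latent variables \(z\) are drawn conditional on \(\theta\), and the data \(x\) conditional on \(z\).

```python
import jax
import jax.numpy as jnp
from jax.scipy.stats import norm

# Illustrative toy hierarchy (an assumption for this sketch):
#   z_i ~ Normal(0, exp(theta/2))   -- latent variables z
#   x_i ~ Normal(z_i, 1)            -- observed data x
# with a single hyperparameter theta.

def sample_x_z(key, theta, ndim=100):
    """Forward simulation from P(x, z | theta)."""
    key_z, key_x = jax.random.split(key)
    z = jnp.exp(theta / 2) * jax.random.normal(key_z, (ndim,))
    x = z + jax.random.normal(key_x, (ndim,))
    return x, z

def log_joint(x, z, theta):
    """Joint log-likelihood: log P(x,z|theta) = log P(x|z) + log P(z|theta)."""
    return (norm.logpdf(x, loc=z, scale=1).sum()
            + norm.logpdf(z, loc=0, scale=jnp.exp(theta / 2)).sum())
```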
The only requirements to run MUSE on a particular problem are that forward simulations from \(\mathcal{P}(x,z\,|\,\theta)\) can be generated, and gradients of the joint likelihood, \(\mathcal{P}(x,z\,|\,\theta)\), with respect to \(z\) and \(\theta\) can be computed. The marginal likelihood is never required, so MUSE could be considered a form of LFI/SBI.
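Both requirements are cheap to satisfy with automatic differentiation. Continuing the hypothetical sketch above, forward simulations come from `sample_x_z`, and the needed gradients from `jax.grad`:

```python
# Gradients of the joint log-likelihood via automatic differentiation;
# argnums selects which argument of log_joint to differentiate.
grad_z     = jax.grad(log_joint, argnums=1)   # d logP / dz
grad_theta = jax.grad(log_joint, argnums=2)   # d logP / dtheta

key = jax.random.PRNGKey(0)
x, z = sample_x_z(key, theta=1.0)             # a forward simulation

print(grad_z(x, z, 1.0).shape)                # (100,): one entry per latent dim
print(grad_theta(x, z, 1.0))                  # scalar gradient for theta
```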
Contents: