Efficient Bayes inference in neural networks through adaptive importance sampling
Publication:6082841
DOI: 10.1016/j.jfranklin.2023.08.044
arXiv: 2210.00993
MaRDI QID: Q6082841
Jean-Christophe Pesquet, Víctor Elvira, Yunshi Huang, Emilie Chouzenoux
Publication date: 30 October 2023
Published in: Journal of the Franklin Institute
Full work available at URL: https://arxiv.org/abs/2210.00993
Cites Work
- Langevin diffusions and Metropolis-Hastings algorithms
- Langevin incremental mixture importance sampling
- Convergence rates for optimised adaptive importance samplers
- Generalized multiple importance sampling
- Layered adaptive importance sampling
- Adaptive Multiple Importance Sampling
- An Adaptive Population Importance Sampler: Learning From Uncertainty
- Riemann Manifold Langevin and Hamiltonian Monte Carlo Methods
- Efficient Bayesian Computation by Proximal Markov Chain Monte Carlo: When Langevin Meets Moreau
- Majorize–Minimize Adapted Metropolis–Hastings Algorithm
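The cited works centre on adaptive importance sampling, in which a proposal distribution is iteratively refitted from self-normalised importance weights. As a hedged illustration of that general idea (not the scheme of the publication above), here is a minimal moment-matching sketch for a toy one-dimensional Gaussian target; the target, proposal family, sample size, and iteration count are all assumptions for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unnormalised target: Gaussian centred at 3 (assumption for illustration)
def target(x):
    return np.exp(-0.5 * (x - 3.0) ** 2)

mu, sigma = 0.0, 2.0  # deliberately misplaced initial proposal
for _ in range(20):
    x = rng.normal(mu, sigma, size=1000)                        # sample proposal
    q = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    w = target(x) / q                                           # importance weights
    w /= w.sum()                                                # self-normalise
    mu = np.sum(w * x)                                          # refit proposal mean
    sigma = np.sqrt(np.sum(w * (x - mu) ** 2))                  # and std deviation

print(mu)  # the adapted proposal mean should approach the target mean, 3
```

Moment matching is only one adaptation rule; the cited works above explore others, e.g. Langevin-driven proposal moves and layered/multiple-proposal weighting schemes.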