Is there an analog of Nesterov acceleration for gradient-based MCMC?
DOI: 10.3150/20-BEJ1297
zbMath: 1475.62123
arXiv: 1902.00996
OpenAlex: W3163522496
MaRDI QID: Q2040101
Authors: Michael I. Jordan, Xiang Cheng, Yi-An Ma, Niladri S. Chatterji, Nicolas Flammarion, Peter L. Bartlett
Publication date: 9 July 2021
Published in: Bernoulli
Full work available at URL: https://arxiv.org/abs/1902.00996
MSC classifications: Bayesian inference (62F15); Monte Carlo methods (65C05); Discrete-time Markov processes on general state spaces (60J05)
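Per the abstract at the arXiv link above, the paper's answer is that an underdamped (kinetic) form of Langevin MCMC plays the role of Nesterov acceleration when sampling is viewed as optimization over probability measures with KL-divergence objective. As a minimal illustrative sketch only (not the paper's discretization or analysis), an Euler-type step of the underdamped dynamics dx = v dt, dv = -gamma*v dt - grad f(x) dt + sqrt(2*gamma) dW, targeting p(x) proportional to exp(-f(x)), could look as follows; the function name, step size, and friction default gamma = 2 are illustrative assumptions:

    # Sketch of an underdamped Langevin sampler (plain Euler-type scheme);
    # all parameter defaults here are illustrative assumptions.
    import numpy as np

    def underdamped_langevin(grad_f, x0, n_steps=10_000, step=0.01, gamma=2.0, seed=0):
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        v = np.zeros_like(x)  # auxiliary momentum variable augmenting the state
        samples = np.empty((n_steps,) + x.shape)
        for i in range(n_steps):
            x = x + step * v                       # position update: dx = v dt
            v = (v - step * gamma * v              # friction term: -gamma v dt
                 - step * grad_f(x)                # potential force: -grad f(x) dt
                 + np.sqrt(2.0 * gamma * step)     # noise: sqrt(2 gamma) dW
                 * rng.standard_normal(x.shape))
            samples[i] = x
        return samples

    # Example: standard Gaussian target, f(x) = ||x||^2 / 2, so grad_f(x) = x.
    draws = underdamped_langevin(lambda x: x, x0=np.zeros(2))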
Related Items
- Projected Wasserstein Gradient Descent for High-Dimensional Bayesian Inference
- Complexity of zigzag sampling algorithm for strongly log-concave distributions
- Birth–death dynamics for sampling: global convergence, approximations and their asymptotics
- On explicit \(L^2\)-convergence rate estimate for underdamped Langevin dynamics
- Exponential entropy dissipation for weakly self-consistent Vlasov-Fokker-Planck equations
- The entropy production of stationary diffusions
- Gradient-Based Markov Chain Monte Carlo for Bayesian Inference With Non-differentiable Priors
- Uniform-in-time propagation of chaos for kinetic mean field Langevin dynamics
- Tail probability estimates of continuous-time simulated annealing processes
- High-dimensional MCMC with a standard splitting scheme for the underdamped Langevin diffusion
- Accelerated information gradient flow
- Efficient stochastic optimisation by unadjusted Langevin Monte Carlo. Application to maximum marginal likelihood and empirical Bayesian estimation
Cites Work
- The Zig-Zag Process and Super-Efficient Sampling for Bayesian Analysis of Big Data
- The Bouncy Particle Sampler: A Non-Reversible Rejection-Free Markov Chain Monte Carlo Method
- Improving the convergence of reversible samplers
- Convex functions on non-convex domains
- Langevin diffusions and Metropolis-Hastings algorithms
- Introductory lectures on convex optimization. A basic course.
- Generalization of an inequality by Talagrand and links with the logarithmic Sobolev inequality
- A variational principle for the Kramers equation with unbounded external forces
- On sampling from a log-concave density using kinetic Langevin diffusions
- User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient
- High-dimensional Bayesian inference via the unadjusted Langevin algorithm
- Irreversible samplers from jump and continuous Markov processes
- Adaptive restart for accelerated gradient schemes
- Nonasymptotic convergence analysis for the unadjusted Langevin algorithm
- Acceleration of convergence to equilibrium in Markov chains by breaking detailed balance
- Coupling and convergence for Hamiltonian Monte Carlo
- Adaptive Thermostats for Noisy Gradient Systems
- Conservative-dissipative approximation schemes for a generalized Kramers equation
- Hypocoercivity
- Logarithmic Sobolev Inequalities
- The Variational Formulation of the Fokker-Planck Equation
- Riemann Manifold Langevin and Hamiltonian Monte Carlo Methods
- A variational perspective on accelerated methods in optimization
- Extension of Convex Function
- Exponential Convergence to Equilibrium for Kinetic Fokker-Planck Equations
- Sampling can be faster than optimization
- Theoretical Guarantees for Approximate Sampling from Smooth and Log-Concave Densities
- Optimal Transport
- A function space HMC algorithm with second order Langevin diffusion limit