Non-asymptotic guarantees for sampling by stochastic gradient descent
Publication: Q2290072
DOI: 10.3103/S1068362319020031
zbMath: 1436.62044
arXiv: 1811.00781
OpenAlex: W2964311113
MaRDI QID: Q2290072
Publication date: 27 January 2020
Published in: Journal of Contemporary Mathematical Analysis. Armenian Academy of Sciences
Full work available at URL: https://arxiv.org/abs/1811.00781
Mathematics Subject Classification:
- Sampling theory, sample surveys (62D05)
- Monte Carlo methods (65C05)
- Asymptotic approximations, asymptotic expansions (steepest descent, etc.) (41A60)
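For orientation, the sampler studied in this line of work is a Langevin-type stochastic gradient scheme: gradient descent on the negative log-density plus injected Gaussian noise. Below is a minimal illustrative sketch, assuming a smooth target density π(x) ∝ exp(−f(x)) and an unbiased stochastic estimate of ∇f; all function names and parameter values are illustrative and are not taken from the paper itself.

```python
import numpy as np

def sgld(grad_estimate, x0, step, n_steps, rng=None):
    # Stochastic gradient Langevin dynamics (illustrative sketch):
    #   x_{k+1} = x_k - step * g(x_k) + sqrt(2 * step) * xi_k,  xi_k ~ N(0, I),
    # where g is an unbiased estimate of grad f and the target is pi ∝ exp(-f).
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        xi = rng.standard_normal(x.shape)
        x = x - step * grad_estimate(x, rng) + np.sqrt(2.0 * step) * xi
    return x

# Example (hypothetical): sample approximately from N(0, I) in 2D, where
# f(x) = |x|^2 / 2, using grad f(x) plus small noise as the stochastic estimate.
if __name__ == "__main__":
    noisy_grad = lambda x, rng: x + 0.1 * rng.standard_normal(x.shape)
    draw = sgld(noisy_grad, x0=np.zeros(2), step=0.01, n_steps=5000)
    print(draw)
```

With a constant step size, the chain targets a biased approximation of π; non-asymptotic analyses of the kind cited below quantify that bias in terms of the step size, the dimension, and the gradient noise.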
Cites Work
- Exponential convergence of Langevin distributions and their discrete approximations
- Langevin diffusions and Metropolis-Hastings algorithms
- Geometric ergodicity of Metropolis algorithms
- Optimal scaling and diffusion limits for the Langevin algorithm in high dimensions
- Langevin-type models. I: Diffusions with given stationary distributions and their discretizations
- High-dimensional Bayesian inference via the unadjusted Langevin algorithm
- Nonasymptotic convergence analysis for the unadjusted Langevin algorithm
- Rate of convergence for ergodic continuous Markov processes: Lyapunov versus Poincaré
- Recursive Stochastic Algorithms for Global Optimization in $\mathbb{R}^d$
- Optimal Scaling of Discrete Approximations to Langevin Diffusions
- Optimization Methods for Large-Scale Machine Learning
- Theoretical Guarantees for Approximate Sampling from Smooth and Log-Concave Densities