Non-asymptotic convergence bounds for modified tamed unadjusted Langevin algorithm in non-convex setting
DOI: 10.1016/j.jmaa.2024.128892
MaRDI QID: Q6640899
Publication date: 20 November 2024
Published in: Journal of Mathematical Analysis and Applications
Keywords: high-dimensional sampling; Langevin SDE; non-asymptotic convergence bounds; modified tamed unadjusted Langevin algorithm; super-linearly growing diffusion coefficients
MSC: Monte Carlo methods (65C05); Stochastic ordinary differential equations (aspects of stochastic analysis) (60H10)
Cites Work
- Strong convergence of an explicit numerical method for SDEs with nonglobally Lipschitz continuous coefficients
- A note on tamed Euler approximations
- Laplace's method revisited: Weak convergence of probability measures
- An introduction to MCMC for machine learning
- The tamed unadjusted Langevin algorithm
- User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient
- High-dimensional Bayesian inference via the unadjusted Langevin algorithm
- Nonasymptotic convergence analysis for the unadjusted Langevin algorithm
- Ergodicity for SDEs and approximations: locally Lipschitz vector fields and degenerate noise
- Nonasymptotic bounds for sampling algorithms without log-concavity
- Strong and weak divergence in finite time of Euler's method for stochastic differential equations with non-globally Lipschitz continuous coefficients
- Quantitative Harris-type theorems for diffusions and McKean–Vlasov processes
- On Stochastic Gradient Langevin Dynamics with Dependent Data Streams: The Fully Nonconvex Case
- Partial differential equations and stochastic methods in molecular dynamics
- Theoretical Guarantees for Approximate Sampling from Smooth and Log-Concave Densities
- MCMC methods for functions: modifying old algorithms to make them faster
- Taming Neural Networks with TUSLA: Nonconvex Learning via Adaptive Stochastic Gradient Langevin Algorithms
- Rapid convergence of the unadjusted Langevin algorithm: isoperimetry suffices
- Kinetic Langevin MCMC sampling without gradient Lipschitz continuity -- the strongly convex case
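Several of the works cited above concern taming schemes for Langevin dynamics with super-linearly growing coefficients. As a minimal sketch, the tamed unadjusted Langevin update (in the spirit of "The tamed unadjusted Langevin algorithm," cited above) rescales the drift by a step-size-dependent factor so that a single step stays bounded even when the gradient grows super-linearly. The double-well potential below is a hypothetical example of a non-convex target, not one taken from the paper.

```python
import numpy as np

def tula_step(theta, grad_u, step, rng):
    """One tamed unadjusted Langevin step: the drift is divided by
    (1 + step * |grad|), which controls super-linear gradient growth."""
    g = grad_u(theta)
    tamed = g / (1.0 + step * np.linalg.norm(g))
    noise = np.sqrt(2.0 * step) * rng.standard_normal(theta.shape)
    return theta - step * tamed + noise

# Hypothetical non-convex target: double-well U(x) = (x^2 - 1)^2 / 4,
# so grad U(x) = x^3 - x (cubic, hence super-linearly growing).
grad_u = lambda x: x**3 - x

rng = np.random.default_rng(0)
theta = np.array([2.0])
samples = []
for _ in range(20000):
    theta = tula_step(theta, grad_u, 0.01, rng)
    samples.append(theta[0])
# The chain remains stable despite the cubic drift; an untamed
# Euler scheme with a large start/step can diverge in finite time.
```

Without the taming factor this is the plain unadjusted Langevin algorithm, whose Euler discretization can diverge for super-linear drifts (see the cited work on strong and weak divergence of Euler's method).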
Related Items (1)