Sublinear Convergence of a Tamed Stochastic Gradient Descent Method in Hilbert Space
From MaRDI portal
Publication:5093647
DOI: 10.1137/21M1427450
MaRDI QID: Q5093647
Tony Stillfjord, Monika Eisenmann
Publication date: 29 July 2022
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/2106.09286
MSC classifications:
- Numerical optimization and variational techniques (65K10)
- Stochastic programming (90C15)
- Applications of functional analysis in optimization, convex analysis, mathematical programming, economics (46N10)
- Numerical solution to inverse problems in abstract spaces (65J22)
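The title refers to a "tamed" variant of stochastic gradient descent, in the spirit of the tamed Euler schemes for SDEs that appear in the citation list below: each increment is divided by a factor growing with the gradient norm, so steps stay bounded even when the gradient grows superlinearly. A minimal Python sketch of this taming idea on a toy problem — the function names and the test objective are illustrative, not taken from the paper, and the exact scheme and step-size conditions analyzed there may differ:

```python
import numpy as np

def tamed_sgd_step(x, grad, step):
    # Taming: divide the update by (1 + step * ||grad||), so each increment
    # has norm at most `step`, even for superlinearly growing gradients.
    return x - step * grad / (1.0 + step * np.linalg.norm(grad))

# Toy objective f(x) = ||x||^4 / 4, whose gradient ||x||^2 * x grows
# superlinearly; untamed SGD with too large a step can blow up here.
rng = np.random.default_rng(0)
x = np.array([3.0, -2.0])
for n in range(1, 20001):
    grad = np.dot(x, x) * x + 0.01 * rng.standard_normal(2)  # noisy gradient
    x = tamed_sgd_step(x, grad, step=1.0 / n)  # decreasing steps mu_n = 1/n
print(np.linalg.norm(x))  # iterate is driven toward the minimizer at 0
```

The decreasing step sizes and the sublinear convergence rate are the subject of the paper; the sketch only shows why taming keeps the iteration stable.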
Cites Work
- Euler approximations with varying coefficients: the case of superlinearly growing diffusion coefficients
- Divergence of the multilevel Monte Carlo Euler method for nonlinear stochastic differential equations
- Strong convergence of an explicit numerical method for SDEs with nonglobally Lipschitz continuous coefficients
- Incremental proximal methods for large scale convex optimization
- Dynamical behavior of a stochastic forward-backward algorithm using random monotone operators
- A note on tamed Euler approximations
- Stochastic proximal splitting algorithm for composite minimization
- Sub-linear convergence of a stochastic proximal iteration method in Hilbert space
- On a perturbation theory and on strong convergence rates for stochastic ordinary and partial differential equations with nonglobally monotone coefficients
- The tamed unadjusted Langevin algorithm
- Higher order Langevin Monte Carlo algorithm
- Asymptotic and finite-sample properties of estimators based on stochastic gradients
- Explicit stabilised gradient descent for faster strongly convex optimisation
- Ergodic Convergence of a Stochastic Proximal Point Algorithm
- Numerical approximations of stochastic differential equations with non-globally Lipschitz continuous coefficients
- Strong and weak divergence in finite time of Euler's method for stochastic differential equations with non-globally Lipschitz continuous coefficients
- Convex integral functionals
- On the Internal Stability of Explicit, m-Stage Runge-Kutta Methods for Large m-Values
- Nonasymptotic convergence of stochastic proximal point algorithms for constrained convex optimization
- Stochastic Model-Based Minimization of Weakly Convex Functions
- Optimization Methods for Large-Scale Machine Learning
- Proximal-Proximal-Gradient Method
- Solving Ordinary Differential Equations II
- Snake: A Stochastic Proximal Gradient Algorithm for Regularized Problems Over Large Graphs
- Deep Learning: An Introduction for Applied Mathematicians
- Second order Chebyshev methods based on orthogonal polynomials
- Explicit Runge-Kutta methods for parabolic partial differential equations
- Scalable estimation strategies based on stochastic approximations: classical results and new insights