Variance reduction techniques for stochastic proximal point algorithms
DOI: 10.1007/s10957-024-02502-6
MaRDI QID: Q6644264
Authors: Cheik Traoré, Silvia Villa, Vassilis Apidopoulos, Saverio Salzo
Publication date: 27 November 2024
Published in: Journal of Optimization Theory and Applications
Cites Work
- Minimizing finite sums with the stochastic average gradient
- Incremental proximal methods for large scale convex optimization
- From error bounds to the complexity of first-order descent methods for convex functions
- Convergence rates for the heavy-ball continuous dynamics for non-convex optimization, under Polyak-Łojasiewicz condition
- Linear convergence of prox-SVRG method for separable non-smooth convex optimization problems under bounded metric subregularity
- On damped second-order gradient systems
- Asymptotic and finite-sample properties of estimators based on stochastic gradients
- Convergence of the forward-backward algorithm: beyond the worst-case with the help of geometry
- Ergodic Convergence of a Stochastic Proximal Point Algorithm
- Large-Scale Machine Learning with Stochastic Gradient Descent
- Nonasymptotic convergence of stochastic proximal point algorithms for constrained convex optimization
- The Proximal Robbins–Monro Method
- Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods
- Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity
- The Łojasiewicz Inequality for Nonsmooth Subanalytic Functions with Applications to Subgradient Dynamical Systems
- Understanding Machine Learning
- Some methods of speeding up the convergence of iteration methods
- A Stochastic Approximation Method
- Convex analysis and monotone operator theory in Hilbert spaces
- A semismooth Newton stochastic proximal point algorithm with variance reduction