Convergence of stochastic proximal gradient algorithm
MaRDI QID: Q2019902
DOI: 10.1007/s00245-019-09617-7
zbMath: 1465.90101
arXiv: 1403.5074
OpenAlex: W2980398138
Wikidata: Q127020378 (Scholia: Q127020378)
Authors: Silvia Villa, Băng Công Vũ, Lorenzo Rosasco
Publication date: 22 April 2021
Published in: Applied Mathematics and Optimization
Full work available at URL: https://arxiv.org/abs/1403.5074
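The paper analyzes the stochastic proximal gradient (stochastic forward-backward) iteration x_{k+1} = prox_{γ_k g}(x_k − γ_k G_k), where G_k is a stochastic estimate of the gradient of the smooth part of the objective. The snippet below is only a minimal illustrative sketch of that generic update on a synthetic sparse regression problem; the quadratic loss, the ℓ1 regularizer, the step-size exponent, and all variable names are assumptions for illustration, not the paper's exact setting or assumptions.

```python
# Minimal sketch of a stochastic proximal gradient (forward-backward) iteration:
#     x_{k+1} = prox_{gamma_k * g}( x_k - gamma_k * G_k )
# with G_k an unbiased stochastic gradient of the smooth part.
# Illustrative choices (quadratic loss, l1 penalty, step sizes) are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sparse regression: minimize E_i [ (a_i @ x - b_i)^2 / 2 ] + lam * ||x||_1
n_samples, dim, lam = 500, 20, 0.1
A = rng.standard_normal((n_samples, dim))
x_true = np.zeros(dim)
x_true[:3] = [2.0, -1.0, 0.5]
b = A @ x_true + 0.1 * rng.standard_normal(n_samples)

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(dim)
for k in range(1, 10_000):
    i = rng.integers(n_samples)            # sample one data point
    grad = (A[i] @ x - b[i]) * A[i]        # unbiased gradient of the smooth term
    gamma = 1.0 / k**0.6                   # diminishing steps: sum gamma = inf, sum gamma^2 < inf
    x = soft_threshold(x - gamma * grad, gamma * lam)  # forward (gradient) then backward (prox) step

print("estimate:", np.round(x, 2))
```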
Related Items (26)
- Asynchronous variance-reduced block schemes for composite non-convex stochastic optimization: block-specific steplengths and adapted batch-sizes
- Stochastic forward-backward splitting for monotone inclusions
- Stochastic block projection algorithms with extrapolation for convex feasibility problems
- Sub-linear convergence of a stochastic proximal iteration method in Hilbert space
- The Stochastic Auxiliary Problem Principle in Banach Spaces: Measurability and Convergence
- Fluorescence image deconvolution microscopy via generative adversarial learning (FluoGAN)
- A dual-based stochastic inexact algorithm for a class of stochastic nonsmooth convex composite problems
- Privacy-preserving federated learning on lattice quantization
- Universal regular conditional distributions via probabilistic transformers
- A new regularized stochastic approximation framework for stochastic inverse problems
- Sharper Bounds for Proximal Gradient Algorithms with Errors
- Analysis of Online Composite Mirror Descent Algorithm
- Maximum Likelihood Estimation of Regularization Parameters in High-Dimensional Inverse Problems: An Empirical Bayesian Approach. Part II: Theoretical Analysis
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- New nonasymptotic convergence rates of stochastic proximal point algorithm for stochastic convex optimization
- General convergence analysis of stochastic first-order methods for composite optimization
- Dynamical behavior of a stochastic forward-backward algorithm using random monotone operators
- Scalable estimation strategies based on stochastic approximations: classical results and new insights
- Stochastic proximal splitting algorithm for composite minimization
- Stochastic proximal-gradient algorithms for penalized mixed models
- Convergence analysis of the stochastic reflected forward-backward splitting algorithm
- High-performance statistical computing in the computing environments of the 2020s
- SABRINA: a stochastic subspace majorization-minimization algorithm
- A Stochastic Variance Reduced Primal Dual Fixed Point Method for Linearly Constrained Separable Optimization
- Binary quantized network training with sharpness-aware minimization
- Proximal Gradient Methods for Machine Learning and Imaging
Cites Work
- Stochastic forward-backward splitting for monotone inclusions
- An optimal method for stochastic composite optimization
- Proximal methods for the latent group lasso penalty
- A sparsity preserving stochastic gradient methods for sparse regression
- Minimizing finite sums with the stochastic average gradient
- Pegasos: primal estimated sub-gradient solver for SVM
- Dynamical behavior of a stochastic forward-backward algorithm using random monotone operators
- Elastic-net regularization in learning theory
- Modified Fejér sequences and applications
- Stochastic Approximations and Perturbations in Forward-Backward Splitting for Monotone Operators
- Accelerated and Inexact Forward-Backward Algorithms
- Proximal Splitting Methods in Signal Processing
- Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization
- Nonparametric sparsity and regularization
- An optimal algorithm for stochastic strongly-convex optimization
- Robust Stochastic Approximation Approach to Stochastic Programming
- Acceleration of Stochastic Approximation by Averaging
- Gradient Convergence in Gradient Methods with Errors
- Optimization Methods for Large-Scale Machine Learning
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Regularization and Variable Selection Via the Elastic Net
- A First-Order Stochastic Primal-Dual Algorithm with Correction Step
- On perturbed proximal gradient algorithms
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms
- A Convergent Incremental Gradient Method with a Constant Step Size
- Prediction, Learning, and Games
- Signal Recovery by Proximal Forward-Backward Splitting
- Understanding Machine Learning
- Stochastic Quasi-Fejér Block-Coordinate Fixed Point Iterations with Random Sweeping
- Stochastic Estimation of the Maximum of a Regression Function
- A Stochastic Approximation Method
- Convex analysis and monotone operator theory in Hilbert spaces
- Structured sparsity through convex optimization