Modified Fejér sequences and applications
Publication: 1790672
DOI: 10.1007/s10589-017-9962-1 · zbMath: 1427.90267 · arXiv: 1510.04641 · OpenAlex: W2768082890 · MaRDI QID: Q1790672
Ding-Xuan Zhou, Silvia Villa, Lorenzo Rosasco, Jun Hong Lin
Publication date: 2 October 2018
Published in: Computational Optimization and Applications
Full work available at URL: https://arxiv.org/abs/1510.04641
Related Items
- Parallel random block-coordinate forward-backward algorithm: a unified convergence analysis
- The Stochastic Auxiliary Problem Principle in Banach Spaces: Measurability and Convergence
- Federated learning for minimizing nonsmooth convex loss functions
- The Variable Metric Forward-Backward Splitting Algorithm Under Mild Differentiability Assumptions
- Analysis of Online Composite Mirror Descent Algorithm
- Convergence of stochastic proximal gradient algorithm
- Introduction to the special issue for SIMAI 2016
- Stochastic proximal-gradient algorithms for penalized mixed models
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Stochastic forward-backward splitting for monotone inclusions
- Incremental proximal methods for large scale convex optimization
- Consistent learning by composite proximal thresholding
- Convex functions, monotone operators and differentiability.
- Variable metric quasi-Fejér monotonicity
- Linear convergence of iterative soft-thresholding
- Ergodic convergence to a zero of the sum of monotone operators in Hilbert space
- On the projected subgradient method for nonsmooth convex optimization in a Hilbert space
- On the convergence of conditional \(\varepsilon\)-subgradient methods for convex programs and convex-concave saddle-point problems.
- Introductory lectures on convex optimization. A basic course.
- Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods
- On proximal subgradient splitting method for minimizing the sum of two nonsmooth convex functions
- Incremental Subgradient Methods for Nondifferentiable Optimization
- A stochastic inertial forward–backward splitting algorithm for multivariate monotone inclusions
- Iterative Regularization for Learning with Convex Loss Functions
- Scaling Techniques for $\epsilon$-Subgradient Methods
- Proximal Splitting Methods in Signal Processing
- Convergence Rate Analysis of the Forward-Douglas-Rachford Splitting Scheme
- On the Numerical Solution of Heat Conduction Problems in Two and Three Space Variables
- Splitting Algorithms for the Sum of Two Nonlinear Operators
- Applications of a Splitting Algorithm to Decomposition in Convex Programming and Variational Inequalities
- On convergence rates of subgradient optimization methods
- Convergence Rates in Forward--Backward Splitting
- Convergence of Approximate and Incremental Subgradient Methods for Convex Optimization
- A First-Order Stochastic Primal-Dual Algorithm with Correction Step
- The Variable Metric Forward-Backward Splitting Algorithm Under Mild Differentiability Assumptions
- Signal Recovery by Proximal Forward-Backward Splitting
- Stochastic Quasi-Fejér Block-Coordinate Fixed Point Iterations with Random Sweeping
- Convex analysis and monotone operator theory in Hilbert spaces