Random Gradient Extrapolation for Distributed and Stochastic Optimization
DOI: 10.1137/17M1157891 · zbMath: 1401.90156 · arXiv: 1711.05762 · OpenAlex: W2963373496 · Wikidata: Q129143221 · Scholia: Q129143221 · MaRDI QID: Q4687240
Authors: Guanghui Lan, Yi Zhou
Publication date: 11 October 2018
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1711.05762
Keywords: stochastic optimization; randomized method; finite-sum optimization; distributed machine learning; gradient extrapolation
MSC: Semidefinite programming (90C22); Convex programming (90C25); Large-scale problems in mathematical programming (90C06); Numerical methods based on nonlinear programming (49M37)
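For orientation, the "finite-sum optimization" and "distributed machine learning" keywords refer to the standard finite-sum setting; the generic formulation below is a reminder of that setting rather than material reproduced from this record (the symbols \(m\), \(f_i\), and \(X\) are generic):
\[
\min_{x \in X} \; f(x) := \frac{1}{m} \sum_{i=1}^{m} f_i(x),
\]
where each component \(f_i\) is typically held by one of \(m\) agents or data blocks and is accessed only through (stochastic) gradient information.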
Cites Work
- Unnamed Item
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- An optimal method for stochastic composite optimization
- Minimizing finite sums with the stochastic average gradient
- Primal-dual first-order methods with \({\mathcal {O}(1/\varepsilon)}\) iteration-complexity for cone programming
- A randomized Kaczmarz algorithm with exponential convergence
- An iterative row-action method for interval convex programming
- Introductory lectures on convex optimization. A basic course.
- An Asynchronous Mini-Batch Algorithm for Regularized Stochastic Optimization
- Robust Stochastic Approximation Approach to Stochastic Programming
- Proximal Minimization Methods with Generalized Bregman Functions
- Bregman Monotone Optimization Algorithms
- Stochastic Dual Averaging for Decentralized Online Optimization on Time-Varying Communication Graphs
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Katyusha: the first direct acceleration of stochastic gradient methods
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- Optimal Distributed Online Prediction using Mini-Batches
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms
- A Convergent Incremental Gradient Method with a Constant Step Size
- Interior Gradient and Proximal Methods for Convex and Conic Optimization
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization