On the Convergence Rate of Incremental Aggregated Gradient Algorithms
Publication: 5266533
DOI: 10.1137/15M1049695
zbMath: 1366.90195
arXiv: 1506.02081
OpenAlex: W3104398353
MaRDI QID: Q5266533
Mert Gürbüzbalaban, Asuman Ozdaglar, Pablo A. Parrilo
Publication date: 16 June 2017
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1506.02081
Classification: Convex programming (90C25); Large-scale problems in mathematical programming (90C06); Nonlinear programming (90C30)
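For context, the incremental aggregated gradient (IAG) method whose convergence rate this paper analyzes maintains a memory of the most recently computed gradient of each component of a finite sum; each iteration refreshes one stored gradient and steps along the aggregate of all stored (possibly stale) gradients. The sketch below is a minimal illustration under assumed names and parameters (the `iag` function, the constant step size, and the least-squares usage example are not from the paper), not a reference implementation.

```python
import numpy as np

def iag(grads, x0, step, n_iters):
    """Minimal incremental aggregated gradient (IAG) sketch.

    grads   : list of callables, grads[i](x) = gradient of component f_i at x
    x0      : initial iterate (numpy array)
    step    : constant step size (assumed small enough; the paper proves
              linear convergence for strongly convex sums in this regime)
    n_iters : number of single-component updates
    """
    x = x0.copy()
    n = len(grads)
    # Memory of the most recent gradient of each component; entries become
    # stale as x moves, which is the delay the convergence analysis handles.
    table = [g(x0) for g in grads]
    agg = np.sum(table, axis=0)          # running sum of stored gradients
    for k in range(n_iters):
        i = k % n                        # deterministic cyclic order
        g_new = grads[i](x)
        agg += g_new - table[i]          # O(d) refresh of the aggregate
        table[i] = g_new
        x = x - (step / n) * agg         # step along the averaged direction
    return x

# Hypothetical usage: least squares, f_i(x) = 0.5 * (a_i @ x - b_i)**2.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 5)), rng.standard_normal(50)
grads = [lambda x, a=A[i], bi=b[i]: a * (a @ x - bi) for i in range(50)]
x_star = iag(grads, np.zeros(5), step=0.05, n_iters=5000)
```

With a deterministic cyclic order and a constant step size, this is the setting in which the paper establishes a linear (geometric) rate for strongly convex sums of smooth components.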
Related Items
- GADMM: Fast and Communication Efficient Framework for Distributed Machine Learning
- Accelerating incremental gradient optimization with curvature information
- A distributed accelerated optimization algorithm over time-varying directed graphs with uncoordinated step-sizes
- An asynchronous subgradient-proximal method for solving additive convex optimization problems
- On Stochastic and Deterministic Quasi-Newton Methods for Nonstrongly Convex Optimization: Asymptotic Convergence and Rate Analysis
- A distributed proximal gradient method with time-varying delays for solving additive convex optimizations
- Linear convergence of cyclic SAGA
- Proximal variable smoothing method for three-composite nonconvex nonsmooth minimization with a linear operator
- Heavy-ball-based optimal thresholding algorithms for sparse linear inverse problems
- Heavy-ball-based hard thresholding algorithms for sparse signal recovery
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Random-reshuffled SARAH does not need full gradient computations
- An incremental mirror descent subgradient algorithm with random sweeping and proximal step
- Distributed Deterministic Asynchronous Algorithms in Time-Varying Graphs Through Dykstra Splitting
- An incremental aggregated proximal ADMM for linearly constrained nonconvex optimization with application to sparse logistic regression problems
- Primal-dual incremental gradient method for nonsmooth and convex optimization problems
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods
- Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate
- Optimization Methods for Large-Scale Machine Learning
- IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate
- Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
- Incremental without replacement sampling in nonconvex optimization
- Variable smoothing incremental aggregated gradient method for nonsmooth nonconvex regularized optimization
- Communication-efficient algorithms for decentralized and stochastic optimization
- Fully asynchronous policy evaluation in distributed reinforcement learning over networks
- Convergence Rate of Incremental Gradient and Incremental Newton Methods
- Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence Under Bregman Distance Growth Conditions
- An inertial parallel and asynchronous forward-backward iteration for distributed convex optimization
- Convergence rates of subgradient methods for quasi-convex optimization problems
- Inertial proximal incremental aggregated gradient method with linear convergence guarantees
- Linear convergence of proximal incremental aggregated gradient method for nonconvex nonsmooth minimization problems
Cites Work
- Incremental gradient algorithms with stepsizes bounded away from zero
- The incremental Gauss-Newton algorithm with adaptive stepsize rule
- Introductory lectures on convex optimization. A basic course.
- Why random reshuffling beats stochastic gradient descent
- Incrementally updated gradient methods for constrained and regularized optimization
- A globally convergent incremental Newton method
- Analysis and Design of Optimization Algorithms via Integral Quadratic Constraints
- An Incremental Gradient(-Projection) Method with Momentum Term and Adaptive Stepsize Rule
- Incremental Least Squares Methods and the Extended Kalman Filter
- Convergence Rate of Incremental Gradient and Incremental Newton Methods
- A Convergent Incremental Gradient Method with a Constant Step Size
- On-line learning for very large data sets