A Convergent Incremental Gradient Method with a Constant Step Size
Publication: 5444279
DOI: 10.1137/040615961 · zbMath: 1154.90015 · OpenAlex: W1988795359 · MaRDI QID: Q5444279
Hillel Gauchman, Doron Blatt, Alfred O. Hero III
Publication date: 25 February 2008
Published in: SIAM Journal on Optimization
Full work available at URL: https://doi.org/10.1137/040615961
Keywords: convergence analysis · neural networks · logistic regression · boosting · sensor networks · incremental gradient method
Mathematics Subject Classification: Numerical mathematical programming methods (65K05) · Nonlinear programming (90C30) · Numerical methods based on nonlinear programming (49M37)
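
The publication's subject is the incremental aggregated gradient (IAG) method: for a finite sum f(x) = Σᵢ fᵢ(x), the method stores the most recently computed gradient of each component, refreshes one component per iteration, and takes a step along the negative aggregate of the stored gradients with a constant step size. The following is a minimal sketch of that scheme, not the authors' implementation; the interface, the cyclic component order, and the toy least-squares problem and step size are illustrative assumptions.

    import numpy as np

    def iag(grad_fns, x0, step_size, n_iters):
        """Sketch of an incremental aggregated gradient loop (assumed interface)."""
        m = len(grad_fns)
        x = np.asarray(x0, dtype=float)
        grads = [g(x) for g in grad_fns]   # stored gradient of each component f_i
        agg = np.sum(grads, axis=0)        # running aggregate of stored gradients
        for k in range(n_iters):
            i = k % m                      # refresh one component per iteration (cyclic)
            new_g = grad_fns[i](x)
            agg += new_g - grads[i]        # cheap aggregate update, no full re-sum
            grads[i] = new_g
            x = x - step_size * agg        # constant step size along the negative aggregate
        return x

    # Toy usage (assumed problem): least squares split row-wise, f_i(x) = 0.5*(a_i @ x - b_i)**2.
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
    grad_fns = [lambda x, a=A[i], bi=b[i]: a * (a @ x - bi) for i in range(20)]
    x_hat = iag(grad_fns, np.zeros(5), step_size=0.005, n_iters=5000)
    print(np.linalg.norm(A.T @ (A @ x_hat - b)))   # full gradient norm ≈ 0 at the solution

Updating the aggregate in place, rather than re-summing all m stored gradients, keeps the per-iteration cost at one component-gradient evaluation plus O(dim) work.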
Related Items (57)
New strong convergence theorems for split variational inclusion problems in Hilbert spaces ⋮ An incremental decomposition method for unconstrained optimization ⋮ Accelerated and Instance-Optimal Policy Evaluation with Linear Function Approximation ⋮ GADMM: Fast and Communication Efficient Framework for Distributed Machine Learning ⋮ Distributed Nash equilibrium seeking: a gossip-based algorithm ⋮ Convergence analysis of iterative methods for nonsmooth convex optimization over fixed point sets of quasi-nonexpansive mappings ⋮ Distributed multi-task classification: a decentralized online learning approach ⋮ Zeroth-Order Regularized Optimization (ZORO): Approximately Sparse Gradients and Adaptive Sampling ⋮ Approximation accuracy, gradient methods, and error bound for structured convex optimization ⋮ Subsampled nonmonotone spectral gradient methods ⋮ Accelerating incremental gradient optimization with curvature information ⋮ A second-order accelerated neurodynamic approach for distributed convex optimization ⋮ A framework for parallel second order incremental optimization algorithms for solving partially separable problems ⋮ Distributed stochastic subgradient projection algorithms for convex optimization ⋮ An asynchronous subgradient-proximal method for solving additive convex optimization problems ⋮ A distributed proximal gradient method with time-varying delays for solving additive convex optimizations ⋮ Linear convergence of cyclic SAGA ⋮ Multi-cluster distributed optimization via random sleep strategy ⋮ On the Efficiency of Random Permutation for ADMM and Coordinate Descent ⋮ Proximal variable smoothing method for three-composite nonconvex nonsmooth minimization with a linear operator ⋮ Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning ⋮ Hierarchical constrained consensus algorithm over multi-cluster networks ⋮ Incremental proximal methods for large scale convex optimization ⋮ An incremental aggregated proximal ADMM for linearly constrained nonconvex optimization with application to sparse logistic regression problems ⋮ String-averaging incremental stochastic subgradient algorithms ⋮ Algorithms and Convergence Theorems for Mixed Equilibrium Problems in Hilbert Spaces ⋮ Primal-dual incremental gradient method for nonsmooth and convex optimization problems ⋮ Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods ⋮ Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate ⋮ Optimization Methods for Large-Scale Machine Learning ⋮ Minimizing finite sums with the stochastic average gradient ⋮ Incrementally updated gradient methods for constrained and regularized optimization ⋮ IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate ⋮ Convergence of stochastic proximal gradient algorithm ⋮ Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods ⋮ The Averaged Kaczmarz Iteration for Solving Inverse Problems ⋮ Random Gradient Extrapolation for Distributed and Stochastic Optimization ⋮ Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications ⋮ Generalized row-action methods for tomographic imaging ⋮ Variable smoothing incremental aggregated gradient method for nonsmooth nonconvex regularized optimization ⋮ Incremental subgradient method for nonsmooth convex optimization with fixed point constraints ⋮ Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization ⋮ Convergence Rate of Incremental Gradient and Incremental Newton Methods ⋮ Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence Under Bregman Distance Growth Conditions ⋮ On perturbed steepest descent methods with inexact line search for bilevel convex optimization ⋮ A randomized incremental primal-dual method for decentralized consensus optimization ⋮ Incremental proximal gradient scheme with penalization for constrained composite convex optimization problems ⋮ Inertial proximal incremental aggregated gradient method with linear convergence guarantees ⋮ Stochastic average gradient algorithm for multirate FIR models with varying time delays using self-organizing maps ⋮ Primal-dual stochastic distributed algorithm for constrained convex optimization ⋮ On the convergence of a block-coordinate incremental gradient method ⋮ Linear convergence of proximal incremental aggregated gradient method for nonconvex nonsmooth minimization problems ⋮ A globally convergent incremental Newton method ⋮ On the Convergence Rate of Incremental Aggregated Gradient Algorithms ⋮ Bregman Finito/MISO for Nonconvex Regularized Finite Sum Minimization without Lipschitz Gradient Continuity ⋮ Proximal Gradient Methods for Machine Learning and Imaging ⋮ Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization