A Convergent Incremental Gradient Method with a Constant Step Size

From MaRDI portal
Publication:5444279

DOI: 10.1137/040615961
zbMath: 1154.90015
OpenAlex: W1988795359
MaRDI QID: Q5444279

Hillel Gauchman, Doron Blatt, Alfred O. Hero III

Publication date: 25 February 2008

Published in: SIAM Journal on Optimization

Full work available at URL: https://doi.org/10.1137/040615961
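The title of this record names an incremental gradient method with a constant step size. As a rough illustration only (not a reproduction of the paper's exact algorithm), an incremental aggregated gradient loop of this flavor can be sketched as below; the function name `iag`, the cyclic component order, and the quadratic toy problem are all assumptions made for the sketch.

```python
# A minimal sketch of an incremental aggregated gradient (IAG) loop with a
# constant step size, minimizing a finite sum f(x) = f_1(x) + ... + f_m(x).
# Everything here is illustrative, not a detail taken from the paper itself.

def iag(grads, x0, step, iters):
    """Visit one component per iteration, remember the last gradient seen for
    each component, and step along the (possibly stale) sum of all of them."""
    m = len(grads)
    x = float(x0)
    memory = [g(x) for g in grads]      # last-computed gradient per component
    agg = sum(memory)                   # running sum of the stored gradients
    for k in range(iters):
        i = k % m                       # cyclic component selection
        new_g = grads[i](x)
        agg += new_g - memory[i]        # refresh the aggregate in O(1)
        memory[i] = new_g
        x -= (step / m) * agg           # constant step size throughout
    return x

# Toy problem: f_i(x) = 0.5 * (x - c_i)^2, whose sum is minimized at mean(c_i).
centers = [1.0, 2.0, 6.0]
grads = [lambda x, c=c: x - c for c in centers]
print(iag(grads, x0=0.0, step=0.5, iters=300))  # converges toward 3.0
```

Keeping the per-component gradients in memory is what lets a constant step size work here: the aggregate direction approximates the full gradient ever more closely as the iterates settle, unlike a single stochastic gradient whose variance forces diminishing steps.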




Related Items (57)

New strong convergence theorems for split variational inclusion problems in Hilbert spaces
An incremental decomposition method for unconstrained optimization
Accelerated and Instance-Optimal Policy Evaluation with Linear Function Approximation
GADMM: Fast and Communication Efficient Framework for Distributed Machine Learning
Distributed Nash equilibrium seeking: a gossip-based algorithm
Convergence analysis of iterative methods for nonsmooth convex optimization over fixed point sets of quasi-nonexpansive mappings
Distributed multi-task classification: a decentralized online learning approach
Zeroth-Order Regularized Optimization (ZORO): Approximately Sparse Gradients and Adaptive Sampling
Approximation accuracy, gradient methods, and error bound for structured convex optimization
Subsampled nonmonotone spectral gradient methods
Accelerating incremental gradient optimization with curvature information
A second-order accelerated neurodynamic approach for distributed convex optimization
A framework for parallel second order incremental optimization algorithms for solving partially separable problems
Distributed stochastic subgradient projection algorithms for convex optimization
An asynchronous subgradient-proximal method for solving additive convex optimization problems
A distributed proximal gradient method with time-varying delays for solving additive convex optimizations
Linear convergence of cyclic SAGA
Multi-cluster distributed optimization via random sleep strategy
On the Efficiency of Random Permutation for ADMM and Coordinate Descent
Proximal variable smoothing method for three-composite nonconvex nonsmooth minimization with a linear operator
Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
Hierarchical constrained consensus algorithm over multi-cluster networks
Incremental proximal methods for large scale convex optimization
An incremental aggregated proximal ADMM for linearly constrained nonconvex optimization with application to sparse logistic regression problems
String-averaging incremental stochastic subgradient algorithms
Algorithms and Convergence Theorems for Mixed Equilibrium Problems in Hilbert Spaces
Primal-dual incremental gradient method for nonsmooth and convex optimization problems
Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods
Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate
Optimization Methods for Large-Scale Machine Learning
Minimizing finite sums with the stochastic average gradient
Incrementally updated gradient methods for constrained and regularized optimization
IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate
Convergence of stochastic proximal gradient algorithm
Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
The Averaged Kaczmarz Iteration for Solving Inverse Problems
Random Gradient Extrapolation for Distributed and Stochastic Optimization
Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications
Generalized row-action methods for tomographic imaging
Variable smoothing incremental aggregated gradient method for nonsmooth nonconvex regularized optimization
Incremental subgradient method for nonsmooth convex optimization with fixed point constraints
Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
Convergence Rate of Incremental Gradient and Incremental Newton Methods
Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence Under Bregman Distance Growth Conditions
On perturbed steepest descent methods with inexact line search for bilevel convex optimization
A randomized incremental primal-dual method for decentralized consensus optimization
Incremental proximal gradient scheme with penalization for constrained composite convex optimization problems
Inertial proximal incremental aggregated gradient method with linear convergence guarantees
Stochastic average gradient algorithm for multirate FIR models with varying time delays using self-organizing maps
Primal-dual stochastic distributed algorithm for constrained convex optimization
On the convergence of a block-coordinate incremental gradient method
Linear convergence of proximal incremental aggregated gradient method for nonconvex nonsmooth minimization problems
A globally convergent incremental Newton method
On the Convergence Rate of Incremental Aggregated Gradient Algorithms
Bregman Finito/MISO for Nonconvex Regularized Finite Sum Minimization without Lipschitz Gradient Continuity
Proximal Gradient Methods for Machine Learning and Imaging
Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization




This page was built for publication: A Convergent Incremental Gradient Method with a Constant Step Size