A New Class of Incremental Gradient Methods for Least Squares Problems
Publication:4376150
DOI: 10.1137/S1052623495287022
zbMath: 0887.49025
MaRDI QID: Q4376150
Publication date: 10 February 1998
Published in: SIAM Journal on Optimization
Mathematics Subject Classification:
- Numerical solutions to overdetermined systems, pseudoinverses (65F20)
- Numerical mathematical programming methods (65K05)
- Numerical optimization and variational techniques (65K10)
- Mathematical programming (90C99)
Related Items
- Communication-reducing algorithm of distributed least mean square algorithm with neighbor-partial diffusion
- Steered sequential projections for the inconsistent convex feasibility problem
- A stochastic successive minimization method for nonsmooth nonconvex optimization with applications to transceiver design in wireless communication networks
- Distributed multi-task classification: a decentralized online learning approach
- Parallel stochastic gradient algorithms for large-scale matrix completion
- Distributed adaptive clustering learning over time-varying multitask networks
- An incremental primal-dual method for nonlinear programming with special structure
- Incremental subgradient algorithms with dynamic step sizes for separable convex optimizations
- Projected Nonlinear Least Squares for Exponential Fitting
- A cyclic iterative approach and its modified version to solve coupled Sylvester-transpose matrix equations
- Distributed event-triggered adaptive partial diffusion strategy under dynamic network topology
- Random algorithms for convex minimization problems
- Incremental proximal methods for large scale convex optimization
- On the application of iterative methods of nondifferentiable optimization to some problems of approximation theory
- String-averaging incremental stochastic subgradient algorithms
- Why random reshuffling beats stochastic gradient descent
- Adaptive clustering based on element-wised distance for distributed estimation over multi-task networks
- A Smooth Inexact Penalty Reformulation of Convex Problems with Linear Constraints
- Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate
- Block-iterative algorithms
- Minimizing finite sums with the stochastic average gradient
- An incremental subgradient method on Riemannian manifolds
- Cyclic and simultaneous iterative methods to matrix equations of the form \(A_iXB_i=F_i\)
- Incrementally updated gradient methods for constrained and regularized optimization
- The Kaczmarz algorithm, row action methods, and statistical learning algorithms
- Applications of convex separable unconstrained nonsmooth optimization to numerical approximation with respect to l1- and l∞-norms
- Incremental without replacement sampling in nonconvex optimization
- Generalized row-action methods for tomographic imaging
- Value and Policy Function Approximations in Infinite-Horizon Optimization Problems
- Approximation schemes for functional optimization problems
- Error stability properties of generalized gradient-type algorithms
- Convergence Rate of Incremental Gradient and Incremental Newton Methods
- On perturbed steepest descent methods with inexact line search for bilevel convex optimization
- An adaptive Polyak heavy-ball method
- A globally convergent incremental Newton method