Pages that link to "Item:Q5266533"
The following pages link to On the Convergence Rate of Incremental Aggregated Gradient Algorithms (Q5266533):
Displaying 41 items.
- Convergence rates of subgradient methods for quasi-convex optimization problems (Q782917)
- An aggregate and iterative disaggregate algorithm with proven optimality in machine learning (Q1689569)
- On variance reduction for stochastic smooth convex optimization with multiplicative noise (Q1739038)
- Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods (Q2023684)
- Incremental without replacement sampling in nonconvex optimization (Q2046568)
- Variable smoothing incremental aggregated gradient method for nonsmooth nonconvex regularized optimization (Q2047203)
- Fully asynchronous policy evaluation in distributed reinforcement learning over networks (Q2063869)
- Inertial proximal incremental aggregated gradient method with linear convergence guarantees (Q2084299)
- An accelerated distributed gradient method with local memory (Q2097691)
- On the convergence of a block-coordinate incremental gradient method (Q2100401)
- On the convergence analysis of aggregated heavy-ball method (Q2104283)
- Accelerating incremental gradient optimization with curvature information (Q2181597)
- Linear convergence of primal-dual gradient methods and their performance in distributed optimization (Q2184550)
- Linear convergence of cyclic SAGA (Q2193004)
- An incremental aggregated proximal ADMM for linearly constrained nonconvex optimization with application to sparse logistic regression problems (Q2226322)
- Primal-dual incremental gradient method for nonsmooth and convex optimization problems (Q2230784)
- Communication-efficient algorithms for decentralized and stochastic optimization (Q2297648)
- An inertial parallel and asynchronous forward-backward iteration for distributed convex optimization (Q2322371)
- Non-asymptotic convergence analysis of inexact gradient methods for machine learning without strong convexity (Q4594841)
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs (Q4602346)
- An incremental mirror descent subgradient algorithm with random sweeping and proximal step (Q4613984)
- Distributed Deterministic Asynchronous Algorithms in Time-Varying Graphs Through Dykstra Splitting (Q4624929)
- Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods (Q4641660)
- Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate (Q4641666)
- Optimization Methods for Large-Scale Machine Learning (Q4641709)
- GADMM: Fast and Communication Efficient Framework for Distributed Machine Learning (Q4969135)
- Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence Under Bregman Distance Growth Conditions (Q4991666)
- On Stochastic and Deterministic Quasi-Newton Methods for Nonstrongly Convex Optimization: Asymptotic Convergence and Rate Analysis (Q5107212)
- Convergence Rate of Incremental Gradient and Incremental Newton Methods (Q5237308)
- IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate (Q5745078)
- Linear convergence of proximal incremental aggregated gradient method for nonconvex nonsmooth minimization problems (Q5865360)
- A distributed accelerated optimization algorithm over time-varying directed graphs with uncoordinated step-sizes (Q6078638)
- An asynchronous subgradient-proximal method for solving additive convex optimization problems (Q6093360)
- A distributed proximal gradient method with time-varying delays for solving additive convex optimizations (Q6110428)
- Proximal variable smoothing method for three-composite nonconvex nonsmooth minimization with a linear operator (Q6126596)
- Heavy-ball-based optimal thresholding algorithms for sparse linear inverse problems (Q6134435)
- Heavy-ball-based hard thresholding algorithms for sparse signal recovery (Q6137779)
- Random-reshuffled SARAH does not need full gradient computations (Q6204201)
- Stochastic subgradient algorithm for nonsmooth nonconvex optimization (Q6578251)
- Incremental quasi-Newton algorithms for solving a nonconvex, nonsmooth, finite-sum optimization problem (Q6586914)
- Convergence on thresholding-based algorithms for dictionary-sparse recovery (Q6669595)