Pages that link to "Item:Q5444279"
From MaRDI portal
The following pages link to A Convergent Incremental Gradient Method with a Constant Step Size (Q5444279):
Displaying 50 items.
- New strong convergence theorems for split variational inclusion problems in Hilbert spaces (Q264502)
- An incremental decomposition method for unconstrained optimization (Q272371)
- Distributed Nash equilibrium seeking: a gossip-based algorithm (Q311956)
- Convergence analysis of iterative methods for nonsmooth convex optimization over fixed point sets of quasi-nonexpansive mappings (Q312694)
- Minimizing finite sums with the stochastic average gradient (Q517295)
- Approximation accuracy, gradient methods, and error bound for structured convex optimization (Q607498)
- Distributed stochastic subgradient projection algorithms for convex optimization (Q620442)
- Incremental proximal methods for large scale convex optimization (Q644913)
- Generalized row-action methods for tomographic imaging (Q742852)
- Distributed multi-task classification: a decentralized online learning approach (Q1640564)
- Convergence of stochastic proximal gradient algorithm (Q2019902)
- Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods (Q2023684)
- Variable smoothing incremental aggregated gradient method for nonsmooth nonconvex regularized optimization (Q2047203)
- Inertial proximal incremental aggregated gradient method with linear convergence guarantees (Q2084299)
- On the convergence of a block-coordinate incremental gradient method (Q2100401)
- Subsampled nonmonotone spectral gradient methods (Q2178981)
- Accelerating incremental gradient optimization with curvature information (Q2181597)
- Linear convergence of cyclic SAGA (Q2193004)
- Hierarchical constrained consensus algorithm over multi-cluster networks (Q2200562)
- An incremental aggregated proximal ADMM for linearly constrained nonconvex optimization with application to sparse logistic regression problems (Q2226322)
- Primal-dual incremental gradient method for nonsmooth and convex optimization problems (Q2230784)
- Incrementally updated gradient methods for constrained and regularized optimization (Q2251572)
- Primal-dual stochastic distributed algorithm for constrained convex optimization (Q2334189)
- A globally convergent incremental Newton method (Q2349125)
- Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization (Q2355319)
- A framework for parallel second order incremental optimization algorithms for solving partially separable problems (Q2419531)
- Multi-cluster distributed optimization via random sleep strategy (Q2423910)
- Incremental subgradient method for nonsmooth convex optimization with fixed point constraints (Q2829569)
- On perturbed steepest descent methods with inexact line search for bilevel convex optimization (Q3112499)
- String-averaging incremental stochastic subgradient algorithms (Q4631774)
- Algorithms and Convergence Theorems for Mixed Equilibrium Problems in Hilbert Spaces (Q4631914)
- Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization (Q4636997)
- Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods (Q4641660)
- Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate (Q4641666)
- Optimization Methods for Large-Scale Machine Learning (Q4641709)
- The Averaged Kaczmarz Iteration for Solving Inverse Problems (Q4686928)
- Random Gradient Extrapolation for Distributed and Stochastic Optimization (Q4687240)
- Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications (Q4687241)
- GADMM: Fast and Communication Efficient Framework for Distributed Machine Learning (Q4969135)
- Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence Under Bregman Distance Growth Conditions (Q4991666)
- A randomized incremental primal-dual method for decentralized consensus optimization (Q4995044)
- Incremental proximal gradient scheme with penalization for constrained composite convex optimization problems (Q4999758)
- Stochastic average gradient algorithm for multirate FIR models with varying time delays using self‐organizing maps (Q5003430)
- Proximal Gradient Methods for Machine Learning and Imaging (Q5028165)
- Zeroth-Order Regularized Optimization (ZORO): Approximately Sparse Gradients and Adaptive Sampling (Q5072595)
- On the Efficiency of Random Permutation for ADMM and Coordinate Descent (Q5108265)
- Convergence Rate of Incremental Gradient and Incremental Newton Methods (Q5237308)
- Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning (Q5254990)
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms (Q5266533)
- IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate (Q5745078)