Distributed accelerated gradient methods with restart under quadratic growth condition
Publication: 6607027
DOI: 10.1007/s10898-024-01395-z
MaRDI QID: Q6607027
Vishnu Narayanan, Palaniappan Balamurugan, Chhavi Sharma
Publication date: 17 September 2024
Published in: Journal of Global Optimization
Keywords: distributed optimization, quadratic growth condition, Nesterov accelerated gradient method, unconstrained and constrained convex optimization
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Fast projection onto the simplex and the \(l_1\) ball
- From error bounds to the complexity of first-order descent methods for convex functions
- Global error bounds for piecewise convex polynomials
- Fast convergence of inertial dynamics and algorithms with asymptotic vanishing viscosity
- Linear convergence of first order methods for non-strongly convex optimization
- Decentralized estimation of Laplacian eigenvalues in multi-agent systems
- The rate of convergence of Nesterov's accelerated forward-backward method is actually faster than \(1/k^2\)
- Fast Distributed Gradient Methods
- Numerical Optimization
- RSG: Beating Subgradient Method without Smoothness and Strong Convexity
- Distributed Linearized Alternating Direction Method of Multipliers for Composite Convex Consensus Optimization
- Convergence Rates of Distributed Nesterov-Like Gradient Methods on Random Networks
- A Proximal Gradient Algorithm for Decentralized Composite Optimization
- First-Order Methods in Optimization
- Distributed Subgradient Methods for Multi-Agent Optimization
- Decentralized Accelerated Gradient Methods With Increasing Penalty Parameters
- Convergence Speed in Distributed Consensus and Averaging
- Optimal Algorithms for Non-Smooth Distributed Optimization in Networks
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- Robust Stopping Criteria for Dykstra's Algorithm
- A Decentralized Primal-Dual Method for Constrained Minimization of a Strongly Convex Function