A simple nearly optimal restart scheme for speeding up first-order methods
DOI: 10.1007/s10208-021-09502-2
OpenAlex: W3141697399
MaRDI QID: Q2696573
Benjamin Grimmer, James Renegar
Publication date: 14 April 2023
Published in: Foundations of Computational Mathematics
Full work available at URL: https://arxiv.org/abs/1803.00151
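As a rough illustration of the general idea behind restart schemes for accelerated first-order methods, the sketch below restarts Nesterov-style momentum whenever the objective value increases, in the spirit of the cited entry "Adaptive restart for accelerated gradient schemes"; it is not the nearly optimal scheme proposed in this paper, and the function names and parameters are illustrative assumptions.

```python
import numpy as np

def accelerated_gradient_with_restart(grad, f, x0, L, max_iters=1000, tol=1e-10):
    """Nesterov-style accelerated gradient descent with a simple function-value restart.

    Momentum is reset whenever the objective increases. This is the basic
    adaptive-restart heuristic, shown only for illustration; it is not the
    restart scheme analyzed in the paper above.
    """
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    f_prev = f(x)
    for _ in range(max_iters):
        x_next = y - grad(y) / L                      # gradient step from the extrapolated point
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2     # standard momentum parameter update
        y = x_next + ((t - 1) / t_next) * (x_next - x)
        f_next = f(x_next)
        if f_next > f_prev:                           # progress stalled: restart the momentum
            y = x_next
            t_next = 1.0
        if abs(f_prev - f_next) < tol:
            return x_next
        x, t, f_prev = x_next, t_next, f_next
    return x

# Usage example (hypothetical): minimize the least-squares objective 0.5 * ||A x - b||^2.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    b = rng.standard_normal(50)
    L = np.linalg.norm(A, 2) ** 2                     # Lipschitz constant of the gradient
    f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
    grad = lambda x: A.T @ (A @ x - b)
    x_star = accelerated_gradient_with_restart(grad, f, np.zeros(20), L)
```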
Related Items
- Additive Schwarz methods for convex optimization with backtracking
- Fast gradient methods for uniformly convex and weakly smooth problems
- Practical perspectives on symplectic accelerated optimization
- Optimal Convergence Rates for the Proximal Bundle Method
- On optimal universal first-order methods for minimizing heterogeneous sums
- General Hölder smooth convergence rates follow from specialized rates assuming growth bounds
- NESTANets: stable, accurate and efficient neural networks for analysis-sparse inverse problems
- Sharpness, Restart, and Acceleration
- Accelerated first-order methods for large-scale convex optimization: nearly optimal complexity under strong convexity
Cites Work
- Smooth minimization of non-smooth functions
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Gradient methods for minimizing composite functions
- First-order algorithm with \({\mathcal{O}(\ln(1/\epsilon))}\) convergence for \({\epsilon}\)-equilibrium in two-person zero-sum games
- Universal gradient methods for convex optimization problems
- Smoothing technique and its applications in semidefinite optimization
- On semi- and subanalytic geometry
- A simplified view of first order methods for optimization
- From error bounds to the complexity of first-order descent methods for convex functions
- Accelerated first-order methods for hyperbolic programming
- Faster subgradient methods for functions with Hölderian growth
- Adaptive restart for accelerated gradient schemes
- Smoothing and First Order Methods: A Unified Framework
- Optimal methods of smooth convex minimization
- On convergence rates of subgradient optimization methods
- Bregman Monotone Optimization Algorithms
- RSG: Beating Subgradient Method without Smoothness and Strong Convexity
- Relatively Smooth Convex Optimization by First-Order Methods, and Applications
- Sharpness, Restart, and Acceleration
- Adaptive restart of accelerated gradient methods under local quadratic growth condition
- The Łojasiewicz Inequality for Nonsmooth Subanalytic Functions with Applications to Subgradient Dynamical Systems
- "Efficient" Subgradient Methods for General Convex Optimization
- A Descent Lemma Beyond Lipschitz Gradient Continuity: First-Order Methods Revisited and Applications