Optimizing the efficiency of first-order methods for decreasing the gradient of smooth convex functions
DOI: 10.1007/s10957-020-01770-2 · zbMATH: 1468.90085 · arXiv: 1803.06600 · OpenAlex: W3095979953 · MaRDI QID: Q2026726
Donghwan Kim, Jeffrey A. Fessler
Publication date: 20 May 2021
Published in: Journal of Optimization Theory and Applications
Full work available at URL: https://arxiv.org/abs/1803.06600
Mathematics Subject Classification (MSC):
- Analysis of algorithms and problem complexity (68Q25)
- Semidefinite programming (90C22)
- Convex programming (90C25)
- Abstract computational complexity for mathematical programming problems (90C60)
- Nonlinear programming (90C30)
- Discrete approximations in optimal control (49M25)
Cites Work
- Smooth minimization of non-smooth functions
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Optimized first-order methods for smooth convex minimization
- Smooth strongly convex interpolation and exact worst-case performance of first-order methods
- The exact information-based complexity of smooth convex minimization
- On the convergence analysis of the optimized gradient method
- Information-based complexity of linear operator equations
- Introductory lectures on convex optimization. A basic course.
- Exact worst-case convergence rates of the proximal gradient method for composite convex minimization
- Lower bounds for finding stationary points I
- Efficient first-order methods for convex minimization: a constructive approach
- Performance of first-order methods for smooth convex minimization: a novel approach
- An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and Its Implications to Second-Order Methods
- Graph Implementations for Nonsmooth Convex Programs
- Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones
- Generalizing the Optimized Gradient Method for Smooth Convex Minimization
- Another Look at the Fast Iterative Shrinkage/Thresholding Algorithm (FISTA)
- Primal–dual accelerated gradient methods with small-dimensional relaxation oracle