Generalizing the Optimized Gradient Method for Smooth Convex Minimization
Publication: 4571883
DOI: 10.1137/17M112124X
zbMath: 1400.90245
arXiv: 1607.06764
Wikidata: Q129601844 (Scholia: Q129601844)
MaRDI QID: Q4571883
Jeffrey A. Fessler, Donghwan Kim
Publication date: 3 July 2018
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1607.06764
Mathematics Subject Classification:
- Analysis of algorithms and problem complexity (68Q25)
- Semidefinite programming (90C22)
- Convex programming (90C25)
- Abstract computational complexity for mathematical programming problems (90C60)
- Nonlinear programming (90C30)
- Discrete approximations in optimal control (49M25)
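The paper generalizes the Optimized Gradient Method (OGM) of Kim and Fessler for minimizing an L-smooth convex function. As a point of reference, the baseline OGM iteration from their earlier work ("Optimized first-order methods for smooth convex minimization", cited below) can be sketched as follows; this is a minimal illustrative implementation, not the generalized family developed in this paper, and the step/momentum details follow the standard OGM recursion with a modified factor at the final iteration:

```python
import numpy as np

def ogm(grad, L, x0, n_iters):
    """Sketch of the Optimized Gradient Method (Kim & Fessler).

    grad    : callable returning the gradient of an L-smooth convex f
    L       : Lipschitz constant of the gradient
    x0      : starting point
    n_iters : fixed number of iterations N (OGM's parameters depend on N)
    """
    x = np.asarray(x0, dtype=float)
    y = x.copy()
    theta = 1.0
    for k in range(n_iters):
        # Standard gradient step from the auxiliary point x_k
        y_next = x - grad(x) / L
        # Momentum parameter recursion; the last iteration uses a
        # larger factor (8 instead of 4), as in the OGM construction
        if k < n_iters - 1:
            theta_next = (1 + np.sqrt(1 + 4 * theta**2)) / 2
        else:
            theta_next = (1 + np.sqrt(1 + 8 * theta**2)) / 2
        # OGM combines the usual Nesterov momentum term with an extra
        # correction term (theta/theta_next)*(y_next - x)
        x = (y_next
             + ((theta - 1) / theta_next) * (y_next - y)
             + (theta / theta_next) * (y_next - x))
        y, theta = y_next, theta_next
    return y
```

For example, on the quadratic f(x) = 0.5 xᵀAx with A = diag(1, 10) (so L = 10), a few hundred iterations drive the iterate near the minimizer at the origin. OGM's worst-case guarantee improves on Nesterov's fast gradient method by a factor of two and matches the exact information-based complexity bound for smooth convex minimization.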
Related Items (14)
- Accelerated proximal algorithms with a correction term for monotone inclusions
- Adaptive restart of the optimized gradient method for convex optimization
- Fast proximal algorithms for nonsmooth convex optimization
- Primal–dual accelerated gradient methods with small-dimensional relaxation oracle
- Potential Function-Based Framework for Minimizing Gradients in Convex and Min-Max Optimization
- Fast gradient methods for uniformly convex and weakly smooth problems
- Factor-\(\sqrt{2}\) acceleration of accelerated gradient methods
- Another Look at the Fast Iterative Shrinkage/Thresholding Algorithm (FISTA)
- Operator Splitting Performance Estimation: Tight Contraction Factors and Optimal Parameter Selection
- Tight Sublinear Convergence Rate of the Proximal Point Algorithm for Maximal Monotone Inclusion Problems
- Accelerated proximal point method for maximally monotone operators
- Optimizing the efficiency of first-order methods for decreasing the gradient of smooth convex functions
- Nearly optimal first-order methods for convex optimization under gradient norm measure: an adaptive regularization approach
- Generalized Momentum-Based Methods: A Hamiltonian Perspective
Cites Work
- Smooth minimization of non-smooth functions
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Optimized first-order methods for smooth convex minimization
- An optimal variant of Kelley's cutting-plane method
- On the convergence of the iterates of the "fast iterative shrinkage/thresholding algorithm"
- Smooth strongly convex interpolation and exact worst-case performance of first-order methods
- The exact information-based complexity of smooth convex minimization
- On the convergence analysis of the optimized gradient method
- Information-based complexity of linear operator equations
- Introductory lectures on convex optimization. A basic course.
- On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions
- Performance of first-order methods for smooth convex minimization: a novel approach
- Fast convergence of inertial dynamics and algorithms with asymptotic vanishing viscosity
- Iteration complexity analysis of dual first-order methods for conic convex programming
- A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights
- An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and Its Implications to Second-Order Methods
- Double Smoothing Technique for Large-Scale Linearly Constrained Convex Optimization
- Graph Implementations for Nonsmooth Convex Programs
- Analysis and Design of Optimization Algorithms via Integral Quadratic Constraints
- New Proximal Point Algorithms for Convex Minimization
- Using SeDuMi 1.02, A Matlab toolbox for optimization over symmetric cones
- Another Look at the Fast Iterative Shrinkage/Thresholding Algorithm (FISTA)
- Exact Worst-Case Performance of First-Order Methods for Composite Convex Optimization