Fast and safe: accelerated gradient methods with optimality certificates and underestimate sequences
DOI: 10.1007/s10589-021-00269-4
zbMath: 1473.90115
arXiv: 1710.03695
OpenAlex: W3154674907
MaRDI QID: Q2044479
Majid Jahani, Chenxin Ma, Martin Takáč, Rachael Tappenden, Naga Venkata C. Gudapati
Publication date: 9 August 2021
Published in: Computational Optimization and Applications
Full work available at URL: https://arxiv.org/abs/1710.03695
Keywords: lower bounds; strongly convex; composite minimization; smooth minimization; accelerated algorithms; estimate sequence; quadratic averaging; underestimate sequence
MSC classifications: Analysis of algorithms and problem complexity (68Q25); Convex programming (90C25); Minimax problems in mathematical programming (90C47)
Related Items (1)
Cites Work
- Smooth minimization of non-smooth functions
- Inexact coordinate descent: complexity and preconditioning
- Gradient methods for minimizing composite functions
- On the complexity analysis of randomized block-coordinate descent methods
- Minimizing finite sums with the stochastic average gradient
- Accelerating the cubic regularization of Newton's method on convex problems
- Introductory lectures on convex optimization. A basic course.
- A flexible coordinate descent method
- Adaptive restart for accelerated gradient schemes
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
- Accelerated, Parallel, and Proximal Coordinate Descent
- An Optimal First Order Method Based on Optimal Quadratic Averaging
- The Approximate Duality Gap Technique: A Unified Theory of First-Order Methods
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- A Stochastic Approximation Method
- An accelerated communication-efficient primal-dual optimization framework for structured machine learning