Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice
Publication: 4558545
zbMath: 1469.68101
arXiv: 1712.05654
MaRDI QID: Q4558545
Authors: Zaid Harchaoui, Hongzhou Lin, Julien Mairal
Publication date: 22 November 2018
Full work available at URL: https://arxiv.org/abs/1712.05654
MSC classification:
- Convex programming (90C25)
- Learning and adaptive systems in artificial intelligence (68T05)
- Problem solving in the context of artificial intelligence (heuristics, search strategies, etc.) (68T20)
- Methods of reduced gradient type (90C52)
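For context, the paper's subject, Catalyst, is a generic acceleration scheme that wraps a linearly convergent first-order method M in an inexact accelerated proximal point loop. The following is a minimal illustrative sketch, not part of this record: it shows the outer loop in the smooth, strongly convex setting, with plain gradient descent standing in for M; the function names, the fixed inner-iteration budget, and the fixed extrapolation weight are simplifying assumptions for the example.

```python
import numpy as np

def catalyst(grad_f, x0, L, mu, kappa, outer_iters=100, inner_iters=50):
    """Sketch of the Catalyst outer loop for an L-smooth, mu-strongly
    convex objective f, using gradient descent as the inner solver M
    (a simplifying assumption for illustration)."""
    q = mu / (mu + kappa)                           # inverse condition number of the prox subproblem
    beta = (1.0 - np.sqrt(q)) / (1.0 + np.sqrt(q))  # fixed extrapolation weight (strongly convex case)
    step = 1.0 / (L + kappa)                        # the subproblem is (L + kappa)-smooth
    x_prev = np.copy(x0)
    y = np.copy(x0)
    for _ in range(outer_iters):
        # Approximately solve min_x f(x) + (kappa/2)||x - y||^2, warm-started at y.
        x = np.copy(y)
        for _ in range(inner_iters):
            x -= step * (grad_f(x) + kappa * (x - y))
        # Nesterov-style extrapolation on the sequence of approximate prox points.
        y = x + beta * (x - x_prev)
        x_prev = x
    return x
```

For instance, for a ridge-regression objective f(x) = ½‖Ax − b‖² + (μ/2)‖x‖², one would pass grad_f = lambda x: A.T @ (A @ x - b) + mu * x with L equal to the largest eigenvalue of AᵀA plus μ. The paper itself uses adaptive stopping criteria and warm-start rules for the subproblems rather than the fixed inner budget shown here.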
Related Items (40)
- Accelerated proximal algorithms with a correction term for monotone inclusions
- Oracle complexity separation in convex optimization
- On the Complexity Analysis of the Primal Solutions for the Accelerated Randomized Dual Coordinate Ascent
- Unnamed Item
- Accelerated meta-algorithm for convex optimization problems
- Bregman Proximal Point Algorithm Revisited: A New Inexact Version and Its Inertial Variant
- Inexact first-order primal-dual algorithms
- One-step optimization method for equilibrium problems
- Inexact successive quadratic approximation for regularized optimization
- Accelerated variance-reduced methods for saddle-point problems
- An accelerated variance reducing stochastic method with Douglas-Rachford splitting
- Principled analyses and design of first-order methods with inexact proximal operators
- Adaptive proximal SGD based on new estimating sequences for sparser ERM
- Accelerated methods for saddle-point problem
- Contracting Proximal Methods for Smooth Convex Optimization
- The Approximate Duality Gap Technique: A Unified Theory of First-Order Methods
- Revisiting EXTRA for Smooth Distributed Optimization
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- Accelerated proximal point method for maximally monotone operators
- Unnamed Item
- Fast convergence of generalized forward-backward algorithms for structured monotone inclusions
- Fast gradient descent for convex minimization problems with an oracle producing a \((\delta, L)\)-model of function at the requested point
- An optimal randomized incremental gradient method
- Stochastic quasi-gradient methods: variance reduction via Jacobian sketching
- Distributed Learning with Sparse Communications by Identification
- Provable accelerated gradient method for nonconvex low rank optimization
- An Inexact Variable Metric Proximal Point Algorithm for Generic Quasi-Newton Acceleration
- Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
- Unnamed Item
- Unnamed Item
- Understanding the acceleration phenomenon via high-resolution differential equations
- Generalized Momentum-Based Methods: A Hamiltonian Perspective
- Unnamed Item
- Unnamed Item
- A Proximal Bundle Variant with Optimal Iteration-Complexity for a Large Range of Prox Stepsizes
- Unnamed Item
- Convergence of Recursive Stochastic Algorithms Using Wasserstein Divergence
- Accelerated proximal envelopes: application to componentwise methods
- On the computational efficiency of catalyst accelerated coordinate descent
- Accelerating variance-reduced stochastic gradient methods
Uses Software
Cites Work
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Gradient methods for minimizing composite functions
- First-order methods of smooth convex optimization with inexact oracle
- Minimizing finite sums with the stochastic average gradient
- Convergence of some algorithms for convex minimization
- Introductory lectures on convex optimization. A basic course.
- An optimal randomized incremental gradient method
- Descentwise inexact proximal algorithms for smooth optimization
- An accelerated inexact proximal point algorithm for convex minimization
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- A Unified Framework for Some Inexact Proximal Point Algorithms
- On Lower and Upper Bounds for Smooth and Strongly Convex Optimization Problems
- Optimization with Sparsity-Inducing Penalties
- Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
- An Accelerated Randomized Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization
- On the Convergence of the Proximal Point Algorithm for Convex Minimization
- New Proximal Point Algorithms for Convex Minimization
- Monotone Operators and the Proximal Point Algorithm
- Practical Aspects of the Moreau--Yosida Regularization: Theoretical Preliminaries
- Forward-Backward Envelope for the Sum of Two Nonconvex Functions: Further Properties and Nonmonotone Linesearch Algorithms
- Vector Extrapolation Methods with Applications
- Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
- Numerical methods for nondifferentiable convex optimization
- A remark on accelerated block coordinate descent for computing the proximity operators of a sum of convex functions
- Katyusha: the first direct acceleration of stochastic gradient methods
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
- Regularization and Variable Selection Via the Elastic Net
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization