Katyusha: the first direct acceleration of stochastic gradient methods
Publication: 4978059
DOI: 10.1145/3055399.3055448
zbMath: 1369.68273
arXiv: 1603.05953
OpenAlex: W2963607709
MaRDI QID: Q4978059
Publication date: 17 August 2017
Published in: Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing
Full work available at URL: https://arxiv.org/abs/1603.05953
Mathematics Subject Classification:
Numerical optimization and variational techniques (65K10)
Learning and adaptive systems in artificial intelligence (68T05)
Stochastic programming (90C15)
Related Items
A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
Optimal Transport-Based Distributionally Robust Optimization: Structural Properties and Iterative Schemes
A regularization interpretation of the proximal point method for weakly convex functions
Cocoercivity, smoothness and bias in variance-reduced stochastic gradient methods
Accelerated randomized mirror descent algorithms for composite non-strongly convex optimization
Factor-\(\sqrt{2}\) acceleration of accelerated gradient methods
Unified analysis of stochastic gradient methods for composite convex and smooth optimization
SVRG meets AdaGrad: painless variance reduction
Linearly-convergent FISTA variant for composite optimization with duality
Optimal analysis of method with batching for monotone stochastic finite-sum variational inequalities
An accelerated variance reducing stochastic method with Douglas-Rachford splitting
First-order methods for convex optimization
On the Adaptivity of Stochastic Gradient-Based Optimization
Gradient complexity and non-stationary views of differentially private empirical risk minimization
Unifying framework for accelerated randomized methods in convex optimization
Random-reshuffled SARAH does not need full gradient computations
Stochastic Model-Based Minimization of Weakly Convex Functions
Accelerated dual-averaging primal–dual method for composite convex minimization
Recent theoretical advances in decentralized distributed convex optimization
On variance reduction for stochastic smooth convex optimization with multiplicative noise
A Smooth Inexact Penalty Reformulation of Convex Problems with Linear Constraints
An Optimal Algorithm for Decentralized Finite-Sum Optimization
Stochastic variance reduced gradient methods using a trust-region-like scheme
Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice
Generalized stochastic Frank-Wolfe algorithm with stochastic ``substitute'' gradient for structured convex optimization
Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
An accelerated directional derivative method for smooth stochastic convex optimization
Random Gradient Extrapolation for Distributed and Stochastic Optimization
Stochastic quasi-gradient methods: variance reduction via Jacobian sketching
Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
Restarting the accelerated coordinate descent method with a rough strong convexity estimate
Provable accelerated gradient method for nonconvex low rank optimization
Nesterov-aided stochastic gradient methods using Laplace approximation for Bayesian design optimization
An inexact proximal augmented Lagrangian framework with arbitrary linearly convergent inner solver for composite convex optimization
Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization
A stochastic Nesterov's smoothing accelerated method for general nonsmooth constrained stochastic composite convex optimization
On stochastic accelerated gradient with convergence rate
A hybrid stochastic optimization framework for composite nonconvex optimization