Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
Publication: 5405257
zbMath: 1307.68073 · arXiv: 1209.1873 · MaRDI QID: Q5405257
Authors: Tong Zhang, Shai Shalev-Shwartz
Publication date: 1 April 2014
Full work available at URL: https://arxiv.org/abs/1209.1873
Keywords: optimization; computational complexity; ridge regression; logistic regression; support vector machines; stochastic dual coordinate ascent; regularized loss minimization
MSC classification: Analysis of algorithms and problem complexity (68Q25); Learning and adaptive systems in artificial intelligence (68T05)
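As a quick orientation to the method this entry refers to, here is a minimal sketch of stochastic dual coordinate ascent (SDCA) for L2-regularized squared loss (ridge regression): each step picks a training example at random, maximizes the dual objective over that example's dual variable in closed form, and keeps the primal vector w = (1/(λn)) Σ_i α_i x_i in sync with the dual variables. The function name, data, and hyperparameters below are illustrative assumptions, not taken from the paper.

```python
# Minimal SDCA sketch for ridge regression (squared loss).
# Illustrative only; names and hyperparameters are assumptions.
import numpy as np

def sdca_ridge(X, y, lam=0.1, epochs=20, seed=0):
    """Minimize (1/n) * sum_i 0.5*(x_i @ w - y_i)**2 + (lam/2)*||w||^2
    by ascending the dual, one randomly chosen coordinate at a time."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    alpha = np.zeros(n)            # dual variables, one per example
    w = np.zeros(d)                # primal iterate, kept equal to X.T @ alpha / (lam * n)
    sq_norms = (X ** 2).sum(axis=1)
    for _ in range(epochs):
        for i in rng.permutation(n):
            # Closed-form maximizer of the dual over coordinate i (squared loss).
            delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + sq_norms[i] / (lam * n))
            alpha[i] += delta
            w += delta * X[i] / (lam * n)   # keep the primal-dual link in sync
    return w, alpha

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 5))
    w_true = rng.standard_normal(5)
    y = X @ w_true + 0.1 * rng.standard_normal(200)
    w, _ = sdca_ridge(X, y, lam=0.01)
    print("recovered weights:", np.round(w, 2))
```

Because the per-coordinate update has a closed form for this loss, each iteration costs O(d), which is what makes the coordinate-wise dual ascent attractive for large training sets.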
Related Items
Stochastic distributed learning with gradient quantization and double-variance reduction
Lower bounds for non-convex stochastic optimization
A stochastic variance reduced gradient using Barzilai-Borwein techniques as second order information
Cluster‐based gradient method for stochastic optimal control problems with elliptic partial differential equation constraint
Adaptive coordinate sampling for stochastic primal–dual optimization
SVRG meets AdaGrad: painless variance reduction
Learning with risks based on M-location
Randomized Block Proximal Damped Newton Method for Composite Self-Concordant Minimization
Importance sampling in signal processing applications
Block mirror stochastic gradient method for stochastic optimization
Accelerating stochastic sequential quadratic programming for equality constrained optimization using predictive variance reduction
Local linear convergence of proximal coordinate descent algorithm
A Method with Convergence Rates for Optimization Problems with Variational Inequality Constraints
Two Symmetrized Coordinate Descent Methods Can Be $O(n^2)$ Times Slower Than the Randomized Version
An Optimal Algorithm for Decentralized Finite-Sum Optimization
Stochastic proximal quasi-Newton methods for non-convex composite optimization
Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
Riemannian Stochastic Variance Reduced Gradient Algorithm with Retraction and Vector Transport
An accelerated communication-efficient primal-dual optimization framework for structured machine learning
A generic coordinate descent solver for non-smooth convex optimisation
Convergence properties of a randomized primal-dual algorithm with applications to parallel MRI
Accelerating mini-batch SARAH by step size rules
On data preconditioning for regularized loss minimization
Block-coordinate and incremental aggregated proximal gradient methods for nonsmooth nonconvex problems
Accelerated, Parallel, and Proximal Coordinate Descent
The Cyclic Block Conditional Gradient Method for Convex Optimization Problems
On the Complexity Analysis of the Primal Solutions for the Accelerated Randomized Dual Coordinate Ascent
An Accelerated Randomized Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization
Inexact coordinate descent: complexity and preconditioning
A flexible coordinate descent method
On optimal probabilities in stochastic coordinate descent methods
A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
Finite-sum smooth optimization with SARAH
Distributed Block Coordinate Descent for Minimizing Partially Separable Functions
Tensor Canonical Correlation Analysis With Convergence and Statistical Guarantees
Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization
A New Homotopy Proximal Variable-Metric Framework for Composite Convex Minimization
Proximal average approximated incremental gradient descent for composite penalty regularized empirical risk minimization
On the Convergence of Stochastic Primal-Dual Hybrid Gradient
The complexity of primal-dual fixed point methods for ridge regression
Linear convergence rate for the MDM algorithm for the nearest point problem
Sketched Newton--Raphson
Optimizing Adaptive Importance Sampling by Stochastic Approximation
Communication-efficient distributed multi-task learning with matrix sparsity regularization
Improving kernel online learning with a snapshot memory
Cocoercivity, smoothness and bias in variance-reduced stochastic gradient methods
Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization
Linear convergence of cyclic SAGA
Generalized forward-backward splitting with penalization for monotone inclusion problems
On the Efficiency of Random Permutation for ADMM and Coordinate Descent
A Randomized Nonmonotone Block Proximal Gradient Method for a Class of Structured Nonlinear Programming
An accelerated variance reducing stochastic method with Douglas-Rachford splitting
Inexact proximal stochastic gradient method for convex composite optimization
Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory
Analyzing random permutations for cyclic coordinate descent
Variance reduction for root-finding problems
Dual block-coordinate forward-backward algorithm with application to deconvolution and deinterlacing of video sequences
Accelerated stochastic variance reduction for a class of convex optimization problems
Avoiding Communication in Primal and Dual Block Coordinate Descent Methods
Extended ADMM and BCD for nonseparable convex minimization models with quadratic coupling terms: convergence analysis and insights
Principal component projection with low-degree polynomials
Worst-case complexity of cyclic coordinate descent: \(O(n^2)\) gap with randomized version
Near-optimal discrete optimization for experimental design: a regret minimization approach
Efficient random coordinate descent algorithms for large-scale structured nonconvex optimization
Forward-reflected-backward method with variance reduction
A Randomized Coordinate Descent Method with Volume Sampling
A Randomized Exchange Algorithm for Computing Optimal Approximate Designs of Experiments
On the complexity of parallel coordinate descent
Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate
Optimization Methods for Large-Scale Machine Learning
A novel Frank-Wolfe algorithm. Analysis and applications to large-scale SVM training
A Coordinate-Descent Primal-Dual Algorithm with Large Step Size and Possibly Nonseparable Functions
Minimizing finite sums with the stochastic average gradient
Stochastic gradient method with Barzilai-Borwein step for unconstrained nonlinear optimization
Nonasymptotic convergence of stochastic proximal point algorithms for constrained convex optimization
Convergence of stochastic proximal gradient algorithm
Point process estimation with Mirror Prox algorithms
Generalized stochastic Frank-Wolfe algorithm with stochastic ``substitute'' gradient for structured convex optimization
Analysis of biased stochastic gradient descent using sequential semidefinite programs
Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
An optimal randomized incremental gradient method
Random Gradient Extrapolation for Distributed and Stochastic Optimization
Parallel decomposition methods for linearly constrained problems subject to simple bound with application to the SVMs training
Randomized smoothing variance reduction method for large-scale non-smooth convex optimization
Negotiating multicollinearity with spike-and-slab priors
Stochastic quasi-gradient methods: variance reduction via Jacobian sketching
Concentration inequalities for sampling without replacement
A stochastic subspace approach to gradient-free optimization in high dimensions
Fastest rates for stochastic mirror descent methods
Inverse optimization approach to the identification of electricity consumer models
Improved asynchronous parallel optimization analysis for stochastic incremental methods
Markov chain block coordinate descent
Provable accelerated gradient method for nonconvex low rank optimization
Differentially Private Distributed Learning
Stochastic DCA for minimizing a large sum of DC functions with application to multi-class logistic regression
Optimization in High Dimensions via Accelerated, Parallel, and Proximal Coordinate Descent
Adaptive Sampling for Incremental Optimization Using Stochastic Gradient Descent
Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
A General Distributed Dual Coordinate Optimization Framework for Regularized Loss Minimization
Efficient learning with robust gradient descent
Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent
Distributed block-diagonal approximation methods for regularized empirical risk minimization
High-dimensional model recovery from random sketched data by exploring intrinsic sparsity
A stochastic alternating direction method of multipliers for non-smooth and non-convex optimization
Proximal Gradient Methods for Machine Learning and Imaging
A hybrid stochastic optimization framework for composite nonconvex optimization