Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
Publication: 4636997
zbMath: 1440.62314 · arXiv: 1409.3257 · MaRDI QID: Q4636997
Authors: Yuchen Zhang, Lin Xiao
Publication date: 17 April 2018
Full work available at URL: https://arxiv.org/abs/1409.3257
Keywords: computational complexity; randomized algorithms; empirical risk minimization; primal-dual algorithms; convex-concave saddle point problems
MSC classifications: Minimax procedures in statistical decision theory (62C20); Stochastic programming (90C15); Stochastic approximation (62L20); Applications of operator theory in optimization, convex analysis, mathematical programming, economics (47N10)
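For context, the keywords above refer to the saddle-point reformulation of regularized empirical risk minimization that the paper studies. In notation standard for this line of work (assumed here, not quoted from the record), the problem

\[ \min_{x \in \mathbb{R}^d} \; \frac{1}{n} \sum_{i=1}^n \phi_i(a_i^\top x) + g(x) \]

is rewritten as the convex-concave saddle point problem

\[ \min_{x \in \mathbb{R}^d} \max_{y \in \mathbb{R}^n} \; \frac{1}{n} \sum_{i=1}^n \big( y_i \langle a_i, x \rangle - \phi_i^*(y_i) \big) + g(x), \]

where \(\phi_i^*\) is the convex conjugate of the loss \(\phi_i\) and \(g\) is the regularizer. Below is a minimal Python sketch of an SPDC-style iteration, specialized to ridge regression so that both proximal steps have closed forms; the function name and step-size constants are illustrative assumptions, not values quoted from the paper.

```python
import numpy as np

def spdc_ridge(A, b, lam, n_epochs=50, seed=0):
    """Illustrative SPDC-style solver for ridge regression:
    min_x (1/(2n)) ||A x - b||^2 + (lam/2) ||x||^2.
    Step-size choices below are assumptions in the spirit of this
    literature, not constants taken from the paper."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    R = np.max(np.linalg.norm(A, axis=1))       # largest row norm of A
    tau = 0.5 / (R * np.sqrt(n * lam))          # primal step size (assumed)
    sigma = 0.5 * np.sqrt(n * lam) / R          # dual step size (assumed)
    theta = 1.0 - 1.0 / (n + 2.0 * R * np.sqrt(n / lam))  # extrapolation weight (assumed)
    x = np.zeros(d)
    y = np.zeros(n)
    x_bar = x.copy()
    u = A.T @ y / n                             # running average (1/n) sum_i y_i a_i
    for _ in range(n_epochs * n):
        i = rng.integers(n)
        a_i = A[i]
        # Dual coordinate step: closed-form prox for phi_i*(y) = y^2/2 + b_i y.
        y_new = (y[i] + sigma * (a_i @ x_bar - b[i])) / (1.0 + sigma)
        dy = y_new - y[i]
        y[i] = y_new
        # Primal step: closed-form prox for g(x) = (lam/2) ||x||^2,
        # applied to the extrapolated gradient estimate u + dy * a_i.
        x_old = x
        x = (x - tau * (u + dy * a_i)) / (1.0 + tau * lam)
        u += dy * a_i / n                       # keep the dual average consistent
        x_bar = x + theta * (x - x_old)         # extrapolation step
    return x
```

Each iteration touches a single dual coordinate and updates the maintained average u incrementally, so the per-iteration cost is O(d) rather than O(nd).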
Related Items
- A new randomized primal-dual algorithm for convex optimization with fast last iterate convergence rates
- Convergence properties of a randomized primal-dual algorithm with applications to parallel MRI
- An \(O(s^r)\)-resolution ODE framework for understanding discrete-time algorithms and applications to the linear convergence of minimax problems
- On the Convergence of Stochastic Primal-Dual Hybrid Gradient
- Communication-efficient distributed multi-task learning with matrix sparsity regularization
- Cyclic Coordinate Dual Averaging with Extrapolation
- Adaptive coordinate sampling for stochastic primal–dual optimization
- Robust Accelerated Primal-Dual Methods for Computing Saddle Points
- Inexact proximal stochastic gradient method for convex composite optimization
- A review on deep learning in medical image reconstruction
- Primal-dual block-proximal splitting for a class of non-convex problems
- Accelerated dual-averaging primal–dual method for composite convex minimization
- Inexact proximal stochastic second-order methods for nonconvex composite optimization
- Primal-dual incremental gradient method for nonsmooth and convex optimization problems
- Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice
- Point process estimation with Mirror Prox algorithms
- Generalized stochastic Frank-Wolfe algorithm with stochastic "substitute" gradient for structured convex optimization
- Iterative pre-conditioning for expediting the distributed gradient-descent method: the case of linear least-squares problem
- Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
- Block-proximal methods with spatially adapted acceleration
- Accelerating variance-reduced stochastic gradient methods
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Smooth minimization of non-smooth functions
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Parallel coordinate descent methods for big data optimization
- Gradient methods for minimizing composite functions
- Erratum to: "Minimizing finite sums with the stochastic average gradient"
- A unified primal-dual algorithm framework based on Bregman iteration
- Incremental proximal methods for large scale convex optimization
- Introductory lectures on convex optimization. A basic course.
- An optimal randomized incremental gradient method
- A first-order primal-dual algorithm for convex problems with applications to imaging
- Coordinate descent algorithms
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- Incremental Subgradient Methods for Nondifferentiable Optimization
- Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
- Large-Scale Machine Learning with Stochastic Gradient Descent
- An Accelerated Randomized Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization
- Robust Stochastic Approximation Approach to Stochastic Programming
- A general purpose unequal probability sampling plan
- Splitting Algorithms for the Sum of Two Nonlinear Operators
- Acceleration of Stochastic Approximation by Averaging
- An Incremental Gradient(-Projection) Method with Momentum Term and Adaptive Stepsize Rule
- Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice
- Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
- doi:10.1162/153244302760200704
- Katyusha: the first direct acceleration of stochastic gradient methods
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- A Convergent Incremental Gradient Method with a Constant Step Size
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
- Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm