Parallel Random Coordinate Descent Method for Composite Minimization: Convergence Analysis and Error Bounds
DOI: 10.1137/130950288 · zbMath: 1329.90108 · arXiv: 1312.5302 · OpenAlex: W2592062427 · MaRDI QID: Q3465244
Publication date: 21 January 2016
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1312.5302
Keywords: rate of convergence; partially separable functions; composite minimization; generalized error bound condition; parallel random coordinate descent algorithm
Numerical mathematical programming methods (65K05) Convex programming (90C25) Large-scale problems in mathematical programming (90C06) Optimality conditions and duality in mathematical programming (90C46) Decomposition methods (49M27)
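The title and keywords refer to randomized coordinate descent for composite minimization, i.e., minimizing a smooth function plus a separable nonsmooth term. As a minimal illustration (a serial sketch on a lasso objective, not the paper's parallel method; all names and parameters here are illustrative), each step picks one coordinate at random, takes a gradient step on the smooth part with the coordinate-wise Lipschitz stepsize, and applies the proximal map of the nonsmooth term:

```python
import numpy as np

# Illustrative sketch: randomized coordinate descent on the composite
# objective F(x) = 0.5 * ||A x - b||^2 + lam * ||x||_1 (lasso).
# Names (random_cd_lasso, soft_threshold) are ours, not from the paper.

def soft_threshold(z, t):
    # Proximal map of t * |.| (soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def random_cd_lasso(A, b, lam, iters=5000, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    L = np.sum(A * A, axis=0)          # coordinate-wise Lipschitz constants ||A[:, i]||^2
    x = np.zeros(n)
    r = A @ x - b                       # residual A x - b, maintained incrementally
    for _ in range(iters):
        i = rng.integers(n)             # pick a coordinate uniformly at random
        g = A[:, i] @ r                 # partial derivative of the smooth part
        # Gradient step on coordinate i, then prox of the l1 term.
        x_new = soft_threshold(x[i] - g / L[i], lam / L[i])
        r += A[:, i] * (x_new - x[i])   # O(m) residual update
        x[i] = x_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((50, 20))
    x_true = np.zeros(20)
    x_true[:3] = [1.0, -2.0, 0.5]
    b = A @ x_true
    x = random_cd_lasso(A, b, lam=0.1)
    print(np.round(x[:5], 2))
```

The parallel variants analyzed in the paper update several randomly chosen coordinates (or blocks) simultaneously, exploiting the partially separable structure named in the keywords; the serial step above is the building block.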
Related Items (27)
Cites Work
- Parallel coordinate descent methods for big data optimization
- Gradient methods for minimizing composite functions
- A random coordinate descent algorithm for optimization problems with composite objective function and linear coupled constraints
- On the complexity analysis of randomized block-coordinate descent methods
- Iteration complexity analysis of block coordinate descent methods
- Interior-point Lagrangian decomposition method for separable convex optimization
- A coordinate gradient descent method for nonsmooth separable minimization
- Error bounds and convergence analysis of feasible descent methods: A general approach
- Introductory lectures on convex optimization. A basic course.
- Bounds for error in the solution set of a perturbed linear program
- Random block coordinate descent methods for linearly constrained optimization over networks
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- Distributed Coordinate Descent Method for Learning with Big Data
- Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
- Rate Analysis of Inexact Dual First-Order Methods: Application to Dual Decomposition
- Randomized Methods for Linear Constraints: Convergence Rates and Conditioning
- Parallel Random Coordinate Descent Method for Composite Minimization: Convergence Analysis and Error Bounds
- Incremental Stochastic Subgradient Algorithms for Convex Optimization
- Variational Analysis
- Random Coordinate Descent Algorithms for Multi-Agent Convex Optimization Over Networks
- Non-Lipschitz $\ell_{p}$-Regularization and Box Constrained Model for Image Restoration
- On the Convergence of Block Coordinate Descent Type Methods