Efficiency of coordinate descent methods on huge-scale optimization problems (Q2910875)
scientific article; zbMATH DE number 6081227
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Efficiency of coordinate descent methods on huge-scale optimization problems | scientific article; zbMATH DE number 6081227 | |
Statements
12 September 2012
coordinate relaxation
Efficiency of coordinate descent methods on huge-scale optimization problems (English)
For the unconstrained minimization of a differentiable convex function \(f(x_{1},\dots,x_{n})\) (\(x_{i}\in \mathbb{R}^{n_{i}}\), \(i=1,\dots,n\)) with a globally Lipschitz gradient, the random coordinate descent method consists of choosing, at iteration \(k\), a random index \(i_{k}\in \{1,\dots,n\}\) and updating the current iterate by taking a step in the direction of the negative partial gradient with respect to \(x_{i_{k}}\), with step size equal to the reciprocal of the Lipschitz constant of that partial gradient. The expected objective function value is shown to converge to the infimum of \(f\); for strongly convex functions the rate of convergence is linear, and an accelerated version is presented. A modification of the method for constrained problems is also introduced. Implementation issues are discussed, and some preliminary numerical experiments are reported.
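The scheme just described admits a compact implementation. Below is a minimal sketch in Python, assuming scalar blocks (\(n_{i}=1\)) and the convex quadratic test problem \(f(x)=\tfrac{1}{2}x^{T}Ax-b^{T}x\), for which the \(i\)-th partial gradient is \((Ax-b)_{i}\) and its coordinate-wise Lipschitz constant is \(L_{i}=A_{ii}\). The function name, the test problem, and all parameters are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of random coordinate descent on a convex quadratic
# f(x) = 0.5 x^T A x - b^T x with A symmetric positive definite.
# All names and parameters here are illustrative, not from the paper.
import numpy as np

def random_coordinate_descent(A, b, num_iters=10_000, seed=0):
    rng = np.random.default_rng(seed)
    n = b.size
    x = np.zeros(n)
    L = np.diag(A)                  # coordinate-wise Lipschitz constants L_i = A_ii
    for _ in range(num_iters):
        i = rng.integers(n)         # pick a coordinate uniformly at random
        grad_i = A[i] @ x - b[i]    # partial gradient of f with respect to x_i
        x[i] -= grad_i / L[i]       # step size 1 / L_i, as in the method described above
    return x

# Usage: for this quadratic the minimizer solves A x = b,
# so the iterate should nearly agree with a direct solve.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = random_coordinate_descent(A, b)
print(x, np.linalg.solve(A, b))
```

For this quadratic, the step \(1/L_{i}\) exactly minimizes \(f\) along the chosen coordinate, which is why a plain per-coordinate gradient step suffices here.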