Gradient-proximal method of solving of extreme inclusions (Q2722322)
From MaRDI portal
scientific article; zbMATH DE number 1617658
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Gradient-proximal method of solving of extreme inclusions | scientific article; zbMATH DE number 1617658 | |
Statements
12 July 2001

Keywords: algorithmic fixed point theory; extreme mapping; numerical functional analysis; splitting; pseudo-symmetric; skew-symmetric; gradient-proximal method; gradient prediction-type method; proximal prediction-type method; monotone convergence; variational inequality; norm-monotonically
Gradient-proximal method of solving of extreme inclusions (English)
The article under review is a valuable contribution to the algorithmic fixed point theory of an extreme mapping, i.e., to numerical functional analysis and optimization. Let \(\Omega\) be a closed, convex subset of \(\mathbb{R}^n\), \(\Phi: \mathbb{R}^n\times \mathbb{R}^n\to \mathbb{R}\), and \(w: \mathbb{R}^n\to{\mathcal P}(\Omega)\) be the set-valued mapping \(w(v):= \text{Argmin}\{\Phi(v,w)\mid w\in\Omega\}\). The fixed point problem then asks for a \(v^*\in\Omega\) such that \(v^*\in w(v^*)\).

The author gives several motivations for the study of this problem: 1. saddle point problems, 2. \(N\)-person games with Nash equilibria, 3. inverse optimization problems, 4. variational inequalities, and 5. computing fixed points of operators. The basic tool for the algorithm is the author's concept of splitting: \(\Phi= S+K\), where \(S\) is pseudo-symmetric and \(K\) is skew-symmetric. Both of these conditions are introduced and analyzed with care. In particular, any differentiable \(\Phi\) admits such a splitting, though nonuniquely (uniqueness is guaranteed when one turns to two narrower classes of functions).

The author's gradient-proximal method directly exploits the splitting, and it is a twofold generalization of known concepts. Namely, in the special cases where \(K= 0\) or \(S= 0\), the new method reduces to the gradient prediction-type method or the proximal prediction-type method, respectively. Let the potential \(P(v)\) of \(S(v,w)\) be convex and let \(K(v,w)\) be convex in \(w\). Then the first main result states (under a suitable choice of parameters) monotone convergence to a solution of a related variational inequality; under the additional convexity of \(\Phi(\cdot,w)\) \((w\in\Omega)\) with respect to \(v\in\Omega\), the method converges norm-monotonically to the required equilibrium.
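The review does not reproduce the paper's precise definitions of "pseudo-symmetric" and "skew-symmetric", so the following is only a minimal analogue: for a bilinear \(\Phi(v,w)=v^{\top}Mw\), the splitting \(\Phi=S+K\) corresponds to decomposing \(M\) into its symmetric and antisymmetric parts. The matrix \(M\) below is a made-up example, not data from the article.

```python
def split_bilinear(M):
    """Split a matrix M into symmetric part S and skew part K with M = S + K.

    For Phi(v, w) = v^T M w this yields a splitting analogous in spirit to
    Phi = S + K (the paper's pseudo-symmetric/skew-symmetric notions are
    more general; this is only an illustrative special case).
    """
    n = len(M)
    S = [[0.5 * (M[i][j] + M[j][i]) for j in range(n)] for i in range(n)]
    K = [[0.5 * (M[i][j] - M[j][i]) for j in range(n)] for i in range(n)]
    return S, K

M = [[2.0, 1.0], [3.0, 4.0]]   # hypothetical example matrix
S, K = split_bilinear(M)
# S is symmetric, K satisfies K^T = -K, and S + K reconstructs M.
```

Note that the symmetric/antisymmetric decomposition of a matrix is unique, whereas for a general differentiable \(\Phi\) the review stresses that the splitting is nonunique.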
The author also proves that, as far as the convexity assumptions are concerned, the roles of \(v\) and \(w\) can simply be interchanged. This is the content of the second main result of this clearly written and well-organized research article.

For the entire collection see [Zbl 0958.00047].