Optimality conditions and a smoothing trust region Newton method for nonlipschitz optimization (Q2866196)
From MaRDI portal
scientific article; zbMATH DE number 6238049
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Optimality conditions and a smoothing trust region Newton method for nonlipschitz optimization | scientific article; zbMATH DE number 6238049 | |
Statements
Publication date: 13 December 2013
Keywords: nonsmooth nonconvex optimization; smoothing methods; convergence; regularized optimization; penalty function; non-Lipschitz; trust region Newton method
Title: Optimality conditions and a smoothing trust region Newton method for nonlipschitz optimization (English)
The article of Xiaojun Chen, Lingfeng Niu and Yaxiang Yuan is a valuable contribution to the areas of nonsmooth optimization, numerical mathematics, statistics, the theory of inverse problems, multicriteria decision making (MCDM) and applied nonlinear functional analysis, as it uses a very wide setting and addresses a broad range of potential applications. The paper is well structured, well written and numerically well exemplified.

The paper appeared at a time when the scientific toolbox of mathematical statistics and, from the engineering side, of inverse problems was expanding rapidly, driven in particular by the insight that mixtures of L2- and L1-based approximation methods should be further developed and implemented. In other words, we are on the way to further advances in ridge regression, Tikhonov regularization and MCDM. Here, quite naturally, smooth and nonsmooth objective terms meet additively, and it is natural to embed this modelling into a much wider nonlinear and higher-order nondifferentiable setting. From within continuous optimization, e.g., through so-called merit functions, but also from nearly any field of the applied sciences, motivation arises to implement and apply the results of this article.

Finally, smoothing techniques constitute a substantial research challenge of their own; they are approached from many viewpoints and with many strategies by researchers worldwide. It is very worthwhile that the authors have contributed to these techniques, too.

Regularized minimization problems with nonconvex, nonsmooth, possibly non-Lipschitz penalty functions have attracted remarkable attention in recent years, due to their wide applications in image restoration, signal reconstruction and variable selection. In this article, the authors derive affine-scaled second-order necessary and sufficient conditions for local minimizers of such minimization problems.
Furthermore, they propose a globally convergent smoothing trust-region Newton method which, from any starting point, finds a candidate solution satisfying the affine-scaled second-order necessary optimality condition. Numerical examples are presented to illustrate the effectiveness of the new smoothing trust-region Newton technique.

The six sections of this paper are as follows: 1. Introduction, 2. Smoothing functions and a smoothing trust region Newton method, 3. Necessary and sufficient optimality conditions, 4. Convergence analysis, 5. Numerical experiments, and 6. Conclusion.

In fact, further strong results and methods may be expected in forthcoming years, stimulated by this article. Such emerging advances could foster additional achievements across science and engineering, from data mining, image processing and inverse problems, to economics, finance and OR, to biology, medicine and healthcare, and, eventually, to the circumstances of life on Earth.
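To make the smoothing idea concrete, the following is a minimal sketch, not the authors' algorithm, of how a non-Lipschitz penalty such as the l_p term (0 < p < 1) can be smoothed and the regularized problem minimized by driving the smoothing parameter to zero. The smoothing choice (x^2 + mu^2)^(p/2) and the plain gradient inner loop are illustrative assumptions, standing in for the paper's trust-region Newton inner solver.

```python
import numpy as np

def smoothed_penalty(x, p=0.5, mu=1e-2):
    """Smooth approximation of the non-Lipschitz penalty sum_i |x_i|^p (0 < p < 1).

    Replaces |x_i| by sqrt(x_i^2 + mu^2), which is smooth for mu > 0 and
    converges to |x_i| as mu -> 0. This is one common smoothing choice,
    not necessarily the one used in the paper.
    """
    return np.sum((x**2 + mu**2) ** (p / 2))

def smoothing_gradient_descent(A, b, lam=0.1, p=0.5, iters=200):
    """Minimize ||Ax - b||^2 + lam * sum_i |x_i|^p by gradient descent on
    successively tighter smoothed problems, shrinking mu in the outer loop.

    A toy stand-in for the smoothing trust-region Newton method: the outer
    loop drives the smoothing parameter to zero, the inner loop (here plain
    gradient descent with a fixed step) minimizes the smoothed objective.
    """
    x = np.zeros(A.shape[1])
    mu = 1.0
    for _ in range(10):                 # outer loop: shrink smoothing parameter
        for _ in range(iters):          # inner loop: minimize smoothed objective
            r = A @ x - b
            grad_fit = 2 * A.T @ r      # gradient of the least-squares term
            grad_pen = lam * p * x * (x**2 + mu**2) ** (p / 2 - 1)
            x = x - 1e-2 * (grad_fit + grad_pen)
        mu *= 0.5
    return x
```

As mu decreases, the smoothed penalty approaches the true non-Lipschitz one while each intermediate subproblem remains smooth enough for derivative-based methods; the paper's contribution lies in coupling such smoothing with a trust-region Newton step and proving convergence to points satisfying the affine-scaled second-order necessary condition.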