A second-derivative SQP method with a `trust-region-free' predictor step (Q2882358)
scientific article; zbMATH DE number 6030231
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | A second-derivative SQP method with a `trust-region-free' predictor step | scientific article; zbMATH DE number 6030231 | |
Statements
4 May 2012
nonlinear programming
nonlinear inequality constraints
sequential quadratic programming
\(\ell_{1}\)-penalty function
nonsmooth optimization
global and local superlinear convergence
numerical experiments
A second-derivative SQP method with a `trust-region-free' predictor step (English)
The authors consider the solution of the problem
\[
\underset{x\in\mathbb{R}^n}{\text{minimize}}\;\phi(x)= f(x)+ \sigma\|[c(x)]^-\|_1,
\]
where \(\sigma\) is a positive scalar known as the penalty parameter and \([c(x)]^-=\min(0,c(x))\) componentwise, and prove that a second-derivative sequential quadratic programming method is globally convergent and locally superlinearly convergent under common assumptions.

Under certain assumptions, solutions of this problem correspond to solutions of the nonlinear programming problem
\[
\underset{x\in\mathbb{R}^n}{\text{minimize}}\;f(x)\quad\text{subject to }c(x)\geq 0.
\]
Numerical experiments are given.
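To make the \(\ell_1\)-penalty function concrete, here is a minimal Python sketch (not taken from the paper) that evaluates \(\phi(x)=f(x)+\sigma\|[c(x)]^-\|_1\) for a hypothetical toy problem; the objective, constraints, and penalty parameter below are illustrative assumptions, not the authors' test problems.

```python
import numpy as np

def phi(x, f, c, sigma):
    """ell_1 exact penalty: phi(x) = f(x) + sigma * ||min(0, c(x))||_1.

    f     : callable returning a scalar objective value
    c     : callable returning the vector of inequality constraints (c(x) >= 0 desired)
    sigma : positive penalty parameter
    """
    violation = np.minimum(0.0, c(x))          # [c(x)]^- , taken componentwise
    return f(x) + sigma * np.linalg.norm(violation, 1)

# Hypothetical toy problem: minimize (x0 - 2)^2 + (x1 - 1)^2  s.t.  x0 + x1 >= 1, x0 >= 0
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
c = lambda x: np.array([x[0] + x[1] - 1.0, x[0]])

x_feasible = np.array([2.0, 1.0])     # satisfies c(x) >= 0, so phi equals f
x_infeasible = np.array([-1.0, 0.5])  # violates both constraints, so phi exceeds f

print(phi(x_feasible, f, c, sigma=10.0))    # 0.0
print(phi(x_infeasible, f, c, sigma=10.0))  # f(x) + 10*(1.5 + 1.0) = 34.25
```

At a feasible point the penalty term vanishes and \(\phi\) coincides with \(f\); for sufficiently large \(\sigma\), minimizers of \(\phi\) correspond to solutions of the constrained problem, which is the correspondence the review refers to.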