A stabilized SQP method: global convergence
From MaRDI portal
Publication: 4683759
DOI: 10.1093/imanum/drw004 · zbMath: 1433.90162 · OpenAlex: W2346238354 · MaRDI QID: Q4683759
Vyacheslav Kungurtsev, Daniel P. Robinson, Philip E. Gill
Publication date: 26 September 2018
Published in: IMA Journal of Numerical Analysis
Full work available at URL: https://semanticscholar.org/paper/2f44571ae04106cf46ea977cf5a0e1d78fbdae8a
Keywords: sequential quadratic programming; nonlinear programming; augmented Lagrangian; SQP methods; stabilized SQP; primal-dual methods; second-order optimality
MSC classifications: Numerical mathematical programming methods (65K05); Nonlinear programming (90C30); Methods of successive quadratic programming type (90C55)
Related Items
- On the componentwise boundedness away from zero of iterates generated by stabilized interior point methods
- On scaled stopping criteria for a safeguarded augmented Lagrangian method with theoretical guarantees
- A second-order optimality condition with first- and second-order complementarity associated with global convergence of algorithms
- A globally convergent regularized interior point method for constrained optimization
- Sequential Quadratic Optimization for Nonlinear Optimization Problems on Riemannian Manifolds
- Analysis of a new sequential optimality condition applied to mathematical programs with equilibrium constraints
- On the convergence analysis of a penalty algorithm for nonsmooth optimization and its performance for solving hard-sphere problems
- Some theoretical limitations of second-order algorithms for smooth constrained optimization
- A stabilized sequential quadratic semidefinite programming method for degenerate nonlinear semidefinite programs
- Second-order enhanced optimality conditions and constraint qualifications
- Convergence of a stabilized SQP method for equality constrained optimization
- A fast and simple modification of Newton's method avoiding saddle points
- A Note on the McCormick Second-Order Constraint Qualification
- On the cost of solving augmented Lagrangian subproblems
- A regularization method for constrained nonlinear least squares
- On the weak second-order optimality condition for nonlinear semidefinite and second-order cone programming
- Exploiting negative curvature in deterministic and stochastic optimization
- On the fulfillment of the complementary approximate Karush-Kuhn-Tucker conditions and algorithmic applications
- A globally convergent Levenberg-Marquardt method for equality-constrained optimization
- Primal-dual active-set methods for large-scale optimization
- FaRSA for ℓ1-regularized convex optimization: local convergence and numerical experience
- Subspace-stabilized sequential quadratic programming
- On the Burer-Monteiro method for general semidefinite programs
- On Optimality Conditions for Nonlinear Conic Programming
- Comments on: Critical Lagrange multipliers: what we currently know about them, how they spoil our lives, and what we can do about it