A Nonmonotone Filter SQP Method: Local Convergence and Numerical Results
DOI: 10.1137/140996677 · zbMath: 1326.49042 · OpenAlex: W1732855436 · MaRDI QID: Q2949516
Yueling Loh, Nicholas I. M. Gould, Daniel P. Robinson
Publication date: 1 October 2015
Published in: SIAM Journal on Optimization
Full work available at URL: https://semanticscholar.org/paper/9e2f8e4597e42fd748f60746330a5007b8abc788
Keywords: sequential quadratic programming; nonlinear programming; penalty function; restoration; filter line search method; nonconvex constrained optimization problems
MSC classification: Numerical mathematical programming methods (65K05); Nonconvex programming, global optimization (90C26); Nonlinear programming (90C30); Numerical optimization and variational techniques (65K10); Numerical methods based on necessary conditions (49M05); Newton-type methods (49M15); Methods of successive quadratic programming type (90C55)
Cites Work
- A line search exact penalty method using steering rules
- A primal-dual augmented Lagrangian
- An adaptive augmented Lagrangian method for large-scale constrained optimization
- An interior-point trust-funnel algorithm for nonlinear optimization
- A stabilized SQP method: superlinear convergence
- A nonmonotone filter method for nonlinear optimization
- On the limited memory BFGS method for large scale optimization
- A nonmonotone filter trust region method for nonlinear constrained optimization
- A globally convergent primal-dual interior-point filter method for nonlinear programming
- On the superlinear local convergence of a filter-SQP method
- Numerical experiments with the Lancelot package (Release A) for large-scale nonlinear optimization
- A class of nonmonotone stabilization methods in unconstrained optimization
- CUTEst: a constrained and unconstrained testing environment with safe threads for mathematical optimization
- Numerical comparison of augmented Lagrangian algorithms for nonconvex problems
- On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming
- An interior algorithm for nonlinear optimization that combines line search and trust region steps
- The Fritz John necessary optimality conditions in the presence of equality and inequality constraints
- A sequential quadratic programming algorithm with an additional equality constrained phase
- A second-derivative SQP method with a 'trust-region-free' predictor step
- An Inexact Sequential Quadratic Optimization Algorithm for Nonlinear Optimization
- A Nonmonotone Filter SQP Method: Local Convergence and Numerical Results
- A Second Derivative SQP Method: Global Convergence
- A Second Derivative SQP Method: Local Convergence and Practical Issues
- Infeasibility Detection and SQP Methods for Nonlinear Optimization
- A Globally Convergent Augmented Lagrangian Algorithm for Optimization with General Constraints and Simple Bounds
- Flexible penalty functions for nonlinear constrained optimization
- A recursive quadratic programming algorithm that uses differentiable exact penalty functions
- The watchdog technique for forcing convergence in algorithms for constrained optimization
- Trust Region Methods
- Exact Penalty Functions in Constrained Optimization
- A Characterization of Superlinear Convergence and Its Application to Quasi-Newton Methods
- On the Global Convergence of a Filter-SQP Algorithm
- A Filter Method with Unified Step Computation for Nonlinear Optimization
- Scalable Nonlinear Programming via Exact Differentiable Penalty Functions and Trust-Region Newton Methods
- An Interior-Point Algorithm for Large-Scale Nonlinear Optimization with Inexact Step Computations
- SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization
- Line Search Filter Methods for Nonlinear Programming: Local Convergence
- A Globally Convergent Stabilized SQP Method
- Benchmarking optimization software with performance profiles
- Nonlinear programming without a penalty function