On Secant Updates for Use in General Constrained Optimization
From MaRDI portal
Publication:3807007
DOI: 10.2307/2008585
zbMath: 0657.90088
OpenAlex: W4252781672
MaRDI QID: Q3807007
Publication date: 1988
Full work available at URL: https://doi.org/10.2307/2008585
Keywords: successive quadratic programming; augmented Lagrangian; secant methods; equality-constrained optimization; local q-superlinear convergence
MSC: Numerical mathematical programming methods (65K05); Nonlinear programming (90C30); Quadratic programming (90C20); Newton-type methods (49M15)
Related Items (13)
- A convergent secant method for constrained optimization
- Convergence theory for the structured BFGS secant method with an application to nonlinear least squares
- Convergence rate of the augmented Lagrangian SQP method
- Augmented penalty algorithms based on BFGS secant approximations and trust regions
- Superlinearly convergent exact penalty methods with projected structured secant updates for constrained nonlinear least squares
- A Structured Quasi-Newton Algorithm for Optimizing with Incomplete Hessian Information
- A constrained min-max algorithm for rival models of the same economic system
- Numerical algorithms for constrained maximum likelihood estimation
- Nonmonotonic projected algorithm with both trust region and line search for constrained optimization
- Orthogonal and conjugate basis methods for solving equality constrained minimization problems
- Adaptive algorithm for constrained least-squares problems
- Equality and inequality constrained optimization algorithms with convergent stepsizes
- Exploiting additional structure in equality constrained optimization by structured SQP secant algorithms
Cites Work
- Diagonalized multiplier methods and quasi-Newton methods for constrained optimization
- Properties of updating methods for the multipliers in augmented Lagrangians
- Convergence Theorems for Least-Change Secant Update Methods
- On the Local Convergence of a Quasi-Newton Method for the Nonlinear Programming Problem
- Projected Hessian Updating Algorithms for Nonlinearly Constrained Optimization
- A Convergence Theory for a Class of Quasi-Newton Methods for Constrained Optimization
- On the Local Convergence of Quasi-Newton Methods for Constrained Optimization
- Quasi-Newton Methods, Motivation and Theory
- Dual Variable Metric Algorithms for Constrained Optimization
- Superlinearly convergent variable metric algorithms for general nonlinear programming problems
- On the Local and Superlinear Convergence of Quasi-Newton Methods