Convergent stepsizes for constrained optimization algorithms
Publication: Q1061005
DOI: 10.1007/BF00939251 · zbMath: 0568.90078 · MaRDI QID: Q1061005
Publication date: 1986
Published in: Journal of Optimization Theory and Applications
Classification: Numerical mathematical programming methods (65K05) · Nonlinear programming (90C30) · Numerical methods based on nonlinear programming (49M37)
Related Items (6)
- Switching stepsize strategies for sequential quadratic programming
- A constrained min-max algorithm for rival models of the same economic system
- An interior-point algorithm for nonlinear minimax problems
- Respecifying the weighting matrix of a quadratic objective function
- Globally convergent interior-point algorithm for nonlinear programming
- Equality and inequality constrained optimization algorithms with convergent stepsizes
Cites Work
- A robust secant method for optimization problems with inequality constraints
- A globally convergent, implementable multiplier method with automatic penalty limitation
- Projection methods in constrained optimisation and applications to optimal policy decisions
- A globally convergent method for nonlinear programming
- Diagonalized multiplier methods and quasi-Newton methods for constrained optimization
- A feasible direction algorithm for convex optimization: Global convergence rates
- Minimization of functions having Lipschitz continuous first partial derivatives
- Newton’s Method and the Goldstein Step-Length Rule for Constrained Minimization Problems
- Reduced quasi-Newton methods with feasibility improvement for nonlinearly constrained optimization
- A superlinearly convergent algorithm for constrained optimization problems
- Global and Asymptotic Convergence Rate Estimates for a Class of Projected Gradient Processes
- On the Local Convergence of Quasi-Newton Methods for Constrained Optimization
- Nonlinear programming via an exact penalty function: Asymptotic analysis
- Quasi-Newton Methods, Motivation and Theory
- Superlinearly convergent quasi-Newton algorithms for nonlinearly constrained optimization problems
- Superlinearly convergent variable metric algorithms for general nonlinear programming problems
- Some examples of cycling in variable metric methods for constrained minimization
- Rates of Convergence for Conditional Gradient Algorithms Near Singular and Nonsingular Extremals
- A general saddle point result for constrained optimization
- Projected Newton Methods for Optimization Problems with Simple Constraints
- Convex programming in Hilbert space