Convergence of algorithms for perturbed optimization problems
From MaRDI portal
Publication: 2641227
DOI: 10.1007/BF02055200
zbMath: 0721.90080
MaRDI QID: Q2641227
Publication date: 1990
Published in: Annals of Operations Research
discretization; perturbation; quasi-Newton methods; conditional gradient method; gradient projection method; infinite-dimensional optimization
Sensitivity, stability, well-posedness (49K40); Sensitivity, stability, parametric optimization (90C31); Programming in abstract spaces (90C48); Computational methods for problems pertaining to operations research and mathematical programming (90-08)
Related Items
Gradient-Based Solution Algorithms for a Class of Bilevel Optimization and Optimal Control Problems with a Nonsmooth Lower Level ⋮ Numerical methods for nonlinear equations
Cites Work
- Rates of convergence for adaptive Newton methods
- Approximate quasi-Newton methods
- The effect of perturbations on the convergence rates of optimization algorithms
- Extremal types for certain \(L^p\) minimization problems and associated large scale nonlinear programs
- Newton's method for singular constrained optimization problems
- A pointwise quasi-Newton method for unconstrained optimal control problems
- Efficient dynamic programming implementations of Newton's method for unconstrained optimal control problems
- Mesh independence of Newton-like methods for infinite dimensional problems
- Newton-Goldstein convergence rates for convex constrained minimization problems with singular solutions
- Diagonally Modified Conditional Gradient Methods for Input Constrained Optimal Control Problems
- A Quasi-Newton Method for Elliptic Boundary Value Problems
- A Mesh-Independence Principle for Operator Equations and Their Discretizations
- Broyden's method in Hilbert space
- The Local Convergence of Broyden-Like Methods on Lipschitzian Problems in Hilbert Spaces
- Quasi-Newton Methods and Unconstrained Optimal Control Problems
- Newton’s Method and the Goldstein Step-Length Rule for Constrained Minimization Problems
- Global and Asymptotic Convergence Rate Estimates for a Class of Projected Gradient Processes
- Fast Algorithms for Compact Fixed Point Problems with Inexact Function Evaluations
- Perturbed Kuhn-Tucker points and rates of convergence for a class of nonlinear-programming algorithms
- Sensitivity analysis for nonlinear programming using penalty methods
- Rates of Convergence for Conditional Gradient Algorithms Near Singular and Nonsingular Extremals