Convergence of algorithms in optimization and solutions of nonlinear equations
From MaRDI portal
Publication: 848721
DOI: 10.1007/s10957-009-9583-7
zbMath: 1183.90386
OpenAlex: W1982425125
MaRDI QID: Q848721
Publication date: 5 March 2010
Published in: Journal of Optimization Theory and Applications
Full work available at URL: https://doi.org/10.1007/s10957-009-9583-7
Keywords: unconstrained optimization; convergence; Lyapunov function; rates of convergence; Newton method; steepest descent method; solutions of equations
Related Items (5)
- Convergence and stability of line search methods for unconstrained optimization
- On a two-phase approximate greatest descent method for nonlinear optimization with equality constraints
- Newton methods to solve a system of nonlinear algebraic equations
- Explicit pseudo-transient continuation and the trust-region updating strategy for unconstrained optimization
- On the bang-bang control approach via a component-wise line search strategy for unconstrained optimization
Cites Work
- A gradient-related algorithm with inexact line searches
- Algorithms for unconstrained optimization problems via control theory
- A. M. Lyapunov's stability theory—100 years on
- Continuous state feedback guaranteeing uniform ultimate boundedness for uncertain dynamic systems
- Global Convergence Properties of Conjugate Gradient Methods for Optimization
- Global convergence of some differential equation algorithms for solving equations involving positive variables
- Numerical Optimization
- Recent Advances in Liapunov Stability Theory
- Stability of Difference Equations and Convergence of Iterative Processes