Convergence of memory gradient methods
Publication: 3518551
DOI: 10.1080/00207160701466370
zbMath: 1152.65071
OpenAlex: W1998995344
MaRDI QID: Q3518551
Publication date: 8 August 2008
Published in: International Journal of Computer Mathematics
Full work available at URL: https://doi.org/10.1080/00207160701466370
Numerical mathematical programming methods (65K05); Nonlinear programming (90C30); Interior-point methods (90C51)
Cites Work
- A gradient-related algorithm with inexact line searches
- Global convergence of nonmonotone descent methods for unconstrained optimization problems
- On the limited memory BFGS method for large scale optimization
- Supermemory descent methods for unconstrained minimization
- Analysis of monotone gradient methods
- On the convergence of descent algorithms
- On memory gradient method with trust region for unconstrained optimization
- Memory gradient method for the minimization of functions
- R-linear convergence of the Barzilai and Borwein gradient method
- The Barzilai and Borwein Gradient Method for the Large Scale Unconstrained Minimization Problem
- An adaptive conic trust-region method for unconstrained optimization
- Two-Point Step Size Gradient Methods
- Testing Unconstrained Optimization Software
- Global Convergence Properties of Conjugate Gradient Methods for Optimization
- Numerical Optimization
- Nonmonotone Spectral Methods for Large-Scale Nonlinear Systems
- A Modified Trust Region Algorithm
- On the Barzilai and Borwein choice of steplength for the gradient method