Convergence of conjugate gradient methods with constant stepsizes
DOI: 10.1080/10556781003721042
zbMath: 1227.49040
OpenAlex: W2050163982
MaRDI QID: Q3096886
Publication date: 15 November 2011
Published in: Optimization Methods and Software
Full work available at URL: https://doi.org/10.1080/10556781003721042
Keywords: unconstrained optimization; global convergence; conjugate gradient method; nonconvex; descent property; method of shortest residuals
MSC classification: Numerical mathematical programming methods (65K05); Nonlinear programming (90C30); Numerical methods based on nonlinear programming (49M37)
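For orientation, the sketch below illustrates the kind of scheme the title refers to: a nonlinear conjugate gradient iteration in which the stepsize is held constant rather than chosen by a line search. It uses the Fletcher-Reeves update for concreteness; the specific coefficient, the stepsize value, and the test problem are illustrative assumptions, not the scheme analyzed in the paper (which also covers the method of shortest residuals).

```python
import numpy as np

def cg_constant_step(grad, x0, alpha=0.1, max_iter=500, tol=1e-8):
    """Nonlinear CG with a constant stepsize (illustrative sketch).

    Assumptions: Fletcher-Reeves coefficient and a fixed alpha; the
    paper's analysis concerns convergence of such constant-stepsize
    variants, not this exact implementation.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # start with steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x = x + alpha * d                    # constant stepsize, no line search
        g_new = grad(x)
        beta = g_new.dot(g_new) / g.dot(g)   # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
    return x

# Usage: minimize the convex quadratic f(x) = 0.5 x'Ax - b'x,
# whose gradient is Ax - b (alpha = 0.1 converges for this problem).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = cg_constant_step(lambda x: A @ x - b, x0=np.zeros(2))
```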
Related Items (3)
- Computational efficiency of the simplex embedding method in convex nondifferentiable optimization
- Method of conjugate subgradients with constrained memory
- Global optimization through a stochastic perturbation of the Polak-Ribière conjugate gradient method
Cites Work
- Global convergence of the method of shortest residuals
- On the method of shortest residuals for unconstrained optimization
- Minimization of functions having Lipschitz continuous first partial derivatives
- Descent Property and Global Convergence of the Fletcher-Reeves Method with Inexact Line Search
- Global Convergence Properties of Conjugate Gradient Methods for Optimization
- Restart procedures for the conjugate gradient method
- On the convergence of conjugate gradient algorithms
- Convergence Properties of Nonlinear Conjugate Gradient Methods
- Convergence properties of the Fletcher-Reeves method
- Function minimization by conjugate gradients
- The conjugate gradient method in extremal problems
- Global convergence of conjugate gradient methods without line search