Gradient algorithms for quadratic optimization with fast convergence rates
DOI: 10.1007/s10589-010-9319-5 · zbMath: 1262.90122 · OpenAlex: W2077067982 · MaRDI QID: Q409265
Luc Pronzato, Anatoly A. Zhigljavsky
Publication date: 12 April 2012
Published in: Computational Optimization and Applications
Full work available at URL: https://orca.cardiff.ac.uk/15200/1/chebyshev-V3_revised.pdf
Related Items (7)
- Steepest descent method with random step lengths
- An asymptotically optimal gradient algorithm for quadratic optimization with low computational cost
- Estimation of spectral bounds in gradient algorithms
- Delayed Gradient Methods for Symmetric and Positive Definite Linear Systems
- On the steplength selection in gradient methods for unconstrained optimization
- A second-order gradient method for convex minimization
- Level set of the asymptotic rate of convergence for the method of steepest descent
Cites Work
- On a successive transformation of probability distribution and its application to the analysis of the optimum gradient method
- Asymptotic behaviour of a family of gradient algorithms in \(\mathbb{R}^d\) and Hilbert spaces
- A new gradient method with an optimal stepsize property
- On the asymptotic directions of the s-dimensional optimum gradient method
- Studying Convergence of Gradient Algorithms Via Optimal Experimental Design Theory
- A Dynamical-System Analysis of the Optimum s-Gradient Algorithm
- Two-Point Step Size Gradient Methods
- Topics in Advanced Econometrics
- Renormalised steepest descent in Hilbert space converges to a two-point attractor.