Differentiating the method of conjugate gradients (Q2877081)
scientific article; zbMATH DE number 6333362
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Differentiating the method of conjugate gradients | scientific article; zbMATH DE number 6333362 | |
Statements
- 21 August 2014
- conjugate gradient algorithm
- Lanczos algorithm
- perturbation analysis
- spectral norm condition number
- numerical experiment
- Differentiating the method of conjugate gradients (English)
This paper introduces a first-order perturbation analysis of the conjugate gradient method, which is widely used for the iterative solution of large sparse systems of linear equations \(Ax=b\) with a symmetric positive definite matrix \(A\in \mathbb{R}^{n\times n}\). The authors derive an expression for \(J_{k}\), the Jacobian of the \(k\)-th iterate \(x_{k}\) with respect to \(b\), in terms of the matrices arising from the Lanczos algorithm and some additional polynomials. This expression yields bounds on the normwise relative error \(\left\| J_{k}-A^{-1}\right\|_{2}/\left\| A^{-1}\right\|_{2}\), as well as on \(\left\| J_{k}\right\|_{2}\), the spectral norm condition number of \(x_{k}\) with respect to perturbations in \(b\). Furthermore, algorithms for computing \(J_{k}v\) and \(J_{k}^{T}v\) for a given vector \(v\) are discussed. Detailed numerical experiments are reported to illustrate the theoretical results.
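The products \(J_{k}v\) and \(J_{k}^{T}v\) can also be formed by automatic differentiation of a plain conjugate gradient iteration, which gives a quick way to probe perturbation quantities of this kind numerically. The following is a minimal JAX sketch, not the Lanczos-based algorithm of the paper; the test matrix, the iteration count, and all variable names are illustrative assumptions.

```python
# Minimal sketch: differentiate k fixed CG iterations with respect to b.
# This is NOT the authors' Lanczos-based algorithm; it only illustrates
# the quantities J_k v (forward mode) and J_k^T v (reverse mode).
import jax
import jax.numpy as jnp

def cg(b, A, k):
    """k steps of the conjugate gradient iteration for A x = b, starting at x0 = 0."""
    x = jnp.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r                  # initial search direction
    rs = r @ r
    for _ in range(k):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Illustrative symmetric positive definite test matrix and data.
n, k = 5, 3
key = jax.random.PRNGKey(0)
M = jax.random.normal(key, (n, n))
A = M @ M.T + n * jnp.eye(n)
b = jnp.ones(n)
v = jnp.linspace(1.0, 2.0, n)

x_k = lambda rhs: cg(rhs, A, k)   # the k-th iterate as a function of b alone

# J_k v via forward-mode AD (directional derivative of x_k along v).
_, Jk_v = jax.jvp(x_k, (b,), (v,))

# J_k^T v via reverse-mode AD.
_, vjp_fun = jax.vjp(x_k, b)
(JkT_v,) = vjp_fun(v)

print(Jk_v)
print(JkT_v)
```

In exact arithmetic the conjugate gradient method terminates after at most \(n\) steps with \(x_{n}=A^{-1}b\), so for \(k=n\) the product \(J_{k}v\) approaches \(A^{-1}v\), consistent with the normwise relative error discussed above.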