Modified conjugate gradient method for the solution of \(Ax=b\) (Q1281489)
scientific article; zbMATH DE number 1267920
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Modified conjugate gradient method for the solution of \(Ax=b\) | scientific article; zbMATH DE number 1267920 | |
Statements
Modified conjugate gradient method for the solution of \(Ax=b\) (English)
31 May 1999
To solve \(Ax=b\), where \(A\) is a symmetric positive definite matrix, by the classical conjugate gradient method, the approximation \(x_k\) at the \(k\)th step is obtained by minimizing \(\|x-y\|_A\) over \(y\in K_k(A,b)=\operatorname{span}\{b,Ab,\ldots,A^{k-1}b\}\), where \(x\) is the exact solution and \(\|x\|_A^2=(x,Ax)\). The error can be estimated in terms of \(\kappa(A)\), the condition number of \(A\). In this paper, it is proposed to solve the system by minimizing instead over the Krylov subspace \(K_k(\sqrt{A},b)\). The error can then be bounded in terms of \(\kappa(\sqrt{A})=\sqrt{\kappa(A)}\), which is smaller and hence gives a better rate of convergence. It is shown that, once \(b\) and \(\sqrt{A}b\) have been computed, the iterates are given by a short recurrence relation (as in the classical case) that requires only one multiplication by \(A\) per iteration step.
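The comparison can be made concrete with the standard conjugate gradient error bound
\[
\|x-x_k\|_A \le 2\left(\frac{\sqrt{\kappa(A)}-1}{\sqrt{\kappa(A)}+1}\right)^k \|x-x_0\|_A .
\]
If the analogous bound holds with \(\kappa(\sqrt{A})=\sqrt{\kappa(A)}\) in place of \(\kappa(A)\) (the review only states that the paper bounds the error in terms of \(\kappa(\sqrt{A})\)), then for \(\kappa(A)=10^4\) the per-step contraction factor would drop from \(99/101\approx 0.98\) to \(9/11\approx 0.82\).

Below is a minimal sketch of the classical conjugate gradient baseline that the paper modifies, written in Python/NumPy purely for illustration. The function name, the dense-matrix setup, and the final factor comparison are assumptions made for this sketch; the paper's modified recurrence over \(K_k(\sqrt{A},b)\) is not reproduced here.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Classical CG sketch: minimizes the A-norm error over
    K_k(A, b) = span{b, Ab, ..., A^(k-1) b}, using one multiplication
    by A per iteration.  (Illustrative baseline, not the paper's
    modified method.)"""
    r = b.astype(float)            # residual b - A x with x0 = 0
    x = np.zeros_like(r)
    p = r.copy()                   # search direction
    rs_old = r @ r
    if max_iter is None:
        max_iter = r.shape[0]
    for _ in range(max_iter):
        Ap = A @ p                 # the single multiplication by A per step
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Contraction factors suggested by the condition numbers,
# using kappa(sqrt(A)) = sqrt(kappa(A)) for SPD A (the second value
# applies to the modified method only if the analogous bound holds).
kappa = 1.0e4
factor = lambda c: (np.sqrt(c) - 1.0) / (np.sqrt(c) + 1.0)
print(factor(kappa), factor(np.sqrt(kappa)))   # ~0.980 vs ~0.818
```

With exact arithmetic the iterates of this sketch realize the Krylov-subspace minimization described above; calling `conjugate_gradient(A, b)` for any symmetric positive definite `A` returns the approximate solution.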
modified conjugate gradient method
Krylov subspace method
convergence
stability
condition number