A hybrid iterative method for symmetric positive definite linear systems (Q1911442)
From MaRDI portal
scientific article; zbMATH DE number 871276
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | A hybrid iterative method for symmetric positive definite linear systems | scientific article; zbMATH DE number 871276 | |
Statements
A hybrid iterative method for symmetric positive definite linear systems (English)
Publication date: 5 January 1997
The paper describes a hybrid method that combines the conjugate gradient (CG) method with Richardson iteration to solve the system \(Ax = b\). Although both methods require a matrix-vector multiplication per step, each CG iteration is more expensive than a Richardson step because of the inner products it also requires. After an approximate solution \(x_m\) and the corresponding residual \(r_m = b - Ax_m = p_m(A)r_0\) have been obtained by \(m\) CG steps, Richardson iterations \(x_{k+1} = x_k + \delta_k r_k\), \(r_k = b - Ax_k\), \(k = m, m+1, \dots\) are carried out. If \(I\) is an interval that approximately contains the eigenvalues of \(A\), then the \(\delta_k\) are chosen as the reciprocals of the Leja points \(z_k\) for the interval \(I\). The first \(m\) Leja points are defined as the zeros of the residual polynomial \(p_m(z) = (1 - z/z_0)\cdots(1 - z/z_{m-1})\), and for \(k \geq m\), \(z_k\) is defined recursively as the \(z \in I\) that maximizes \(z|q_k(z)|\), where \(q_k(z) = (z - z_0)\cdots(z - z_{k-1})\). This choice of the parameters \(\delta_k\) gives an asymptotically optimal rate of convergence, provided that the spectrum of \(A\) is contained in \(I\). The Richardson iterations are continued until convergence or until it is detected that the interval \(I\) is too small. In the latter case, \(m\) new CG steps are performed, a new interval \(I\) is obtained from them, and the Richardson iteration is resumed. It is explained in detail how to estimate \(I\) (i.e., the extreme eigenvalues of \(A\)) from the parameters generated by the CG steps. Usually \(m\) is relatively small, so a very efficient method results. Numerical results illustrate the method.
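To make the idea concrete, here is a minimal sketch in Python/NumPy. All names (`leja_points`, `hybrid_cg_richardson`) and details are illustrative assumptions, not the authors' implementation: the interval \(I\) is estimated from the Lanczos tridiagonal matrix assembled from the CG coefficients, all Leja points are generated directly on that interval rather than seeded with the zeros of the CG residual polynomial, and the paper's safeguard of restarting CG when \(I\) proves too small is omitted.

```python
import numpy as np


def leja_points(a, b, n, grid=1000):
    """Generate n Leja-type points on the interval [a, b].

    z_0 is the candidate of largest modulus; each subsequent z_k maximizes
    z * |(z - z_0) ... (z - z_{k-1})| over a fine grid on [a, b]
    (a discretized stand-in for the continuous maximization in the text).
    """
    zs = np.linspace(a, b, grid)
    pts = [zs[np.argmax(np.abs(zs))]]
    for _ in range(1, n):
        w = np.abs(zs)
        for p in pts:
            w = w * np.abs(zs - p)
        pts.append(zs[np.argmax(w)])
    return np.array(pts)


def hybrid_cg_richardson(A, b, m=10, n_rich=50, tol=1e-10):
    """m CG steps to estimate the spectral interval I, then Richardson
    iterations x_{k+1} = x_k + delta_k * r_k with delta_k = 1/z_k,
    where the z_k are Leja points for I."""
    x = np.zeros(len(b))
    r = b - A @ x
    p = r.copy()
    alphas, betas = [], []

    # --- phase 1: m steps of plain CG, recording the coefficients ---
    for _ in range(m):
        Ap = A @ p
        rr = r @ r
        alpha = rr / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        beta = (r @ r) / rr
        p = r + beta * p
        alphas.append(alpha)
        betas.append(beta)

    # --- estimate I = [lmin, lmax] from the Lanczos tridiagonal matrix
    #     implicitly defined by the CG coefficients ---
    T = np.zeros((m, m))
    for j in range(m):
        T[j, j] = 1.0 / alphas[j]
        if j > 0:
            T[j, j] += betas[j - 1] / alphas[j - 1]
        if j < m - 1:
            T[j, j + 1] = T[j + 1, j] = np.sqrt(betas[j]) / alphas[j]
    ritz = np.linalg.eigvalsh(T)
    lmin, lmax = ritz[0], ritz[-1]

    # --- phase 2: Richardson sweeps with reciprocal Leja points as step
    #     sizes. (The paper restarts CG when I is too small; omitted here.) ---
    for z in leja_points(lmin, lmax, n_rich):
        r = b - A @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        x = x + r / z
    return x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((200, 200)))
    A = Q @ np.diag(np.linspace(1.0, 100.0, 200)) @ Q.T  # SPD test matrix
    b = rng.standard_normal(200)
    x = hybrid_cg_richardson(A, b, m=15, n_rich=200)
    print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

The design point mirrors the review: the comparatively expensive CG phase is used only to supply the spectral interval \(I\), after which the cheaper Richardson sweeps do the bulk of the work with step sizes \(\delta_k = 1/z_k\).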
symmetric positive definite linear systems
conjugate gradient method
numerical results
Richardson iteration
optimal rate of convergence
extreme eigenvalues