Efficient computation of the exponential operator for large, sparse, symmetric matrices (Q2760328)
scientific article; zbMATH DE number 1684489
Statements
19 December 2001
Keywords: Krylov subspace method; Chebyshev series expansion; exponential operator; large matrices; sparse matrices; symmetric matrices; matrix exponential; algorithm; Lanczos process; largest eigenvalue; smallest eigenvalue; Rayleigh quotient-based technique; error estimates; numerical experiments; finite difference; finite element; linear parabolic partial differential equations
Efficient computation of the exponential operator for large, sparse, symmetric matrices (English)
Two techniques for approximating the matrix exponential are discussed and compared: the Krylov subspace approach and the Chebyshev series expansion. While the Krylov subspace-based method for computing the matrix exponential does not require any a priori knowledge of the spectrum, the Chebyshev method needs a sharp approximation of an inclusion set for it. On the other hand, the Chebyshev method requires only six vectors, whereas the Krylov subspace approach, in addition to the cost of the (symmetric) Lanczos process, requires keeping all the basis vectors. This may be prohibitive in some cases, and the basis vectors must then be recomputed in a second run of the algorithm, which leads to up to twice as much computational work as for the standard Lanczos process. Only the case of symmetric (positive or negative) definite matrices is discussed. Some implications for the general nonsymmetric case are mentioned at the end of the paper.

In the theoretical part of the paper, implementation details and other aspects of both approaches are described. In the Chebyshev method, the largest eigenvalue is approximated by several iterations of the (symmetric) Lanczos process, and a Rayleigh quotient-based technique is used to estimate the smallest eigenvalue. Several a priori and a posteriori relative error estimates for determining the final iteration number are considered.

Numerical experiments in the second part of the paper indicate that the Chebyshev method can be an efficient alternative to the Krylov subspace approach, especially when memory limitations do not allow keeping all the basis vectors and when the stopping criteria are based on the a priori or a posteriori error estimates. The numerical test examples use matrices arising from finite difference and finite element discretizations of 2D and 3D linear parabolic partial differential equations. The reliability of the error estimates from the theoretical part is examined, and the sensitivity of the Chebyshev method to the initial eigenvalue estimates is discussed.
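To make the Chebyshev-series idea concrete, the following is a minimal illustrative sketch in Python, assuming SciPy is available; it is not the authors' implementation. It approximates exp(A)v for a sparse symmetric matrix A from bounds lmin ≤ λ_min(A) and lmax ≥ λ_max(A), which in practice could be obtained from a few Lanczos iterations (e.g. via scipy.sparse.linalg.eigsh) as described in the review; the function name expm_chebyshev, the stopping rule, and the test matrix are hypothetical choices made for illustration.

```python
# Minimal sketch of a Chebyshev-series approximation of exp(A) @ v for a
# sparse symmetric matrix A, given an interval [lmin, lmax] enclosing its
# spectrum.  Illustrative only; not the implementation from the paper.
import numpy as np
import scipy.sparse as sp
from scipy.special import ive          # ive(k, c) = exp(-c) * I_k(c) for c > 0

def expm_chebyshev(A, v, lmin, lmax, kmax=300, tol=1e-13):
    """Approximate exp(A) @ v by a truncated Chebyshev series on [lmin, lmax]."""
    c = (lmax - lmin) / 2.0            # half-width of the spectral interval
    d = (lmax + lmin) / 2.0            # midpoint of the spectral interval
    # On [lmin, lmax]: exp(x) = e^(d+c) * [ive(0,c) + 2 * sum_k ive(k,c) * T_k(t)],
    # with t = (x - d)/c, so only the three-term Chebyshev recurrence is needed.
    t_prev = v.copy()                  # T_0(B) v, where B = (A - d*I)/c
    t_curr = (A @ v - d * v) / c       # T_1(B) v
    y = ive(0, c) * t_prev + 2.0 * ive(1, c) * t_curr
    for k in range(2, kmax + 1):
        t_next = 2.0 * (A @ t_curr - d * t_curr) / c - t_prev
        coeff = 2.0 * ive(k, c)
        y += coeff * t_next
        # The Bessel coefficients decay super-exponentially once k exceeds c,
        # so a simple relative criterion on the last added term suffices here.
        if abs(coeff) * np.linalg.norm(t_next) <= tol * np.linalg.norm(y):
            break
        t_prev, t_curr = t_curr, t_next
    return np.exp(d + c) * y

# Small usage example: a 1D finite-difference Laplacian (symmetric negative
# definite); Gershgorin's theorem gives the crude bounds lmin = -4, lmax = 0.
n = 200
A = sp.diags([np.full(n, -2.0), np.ones(n - 1), np.ones(n - 1)],
             [0, -1, 1], format="csr")
v = np.random.default_rng(0).standard_normal(n)
y = expm_chebyshev(A, v, lmin=-4.0, lmax=0.0)
```

Note that only a handful of working vectors (t_prev, t_curr, t_next, y) are kept, which corresponds to the low-memory advantage over the Krylov subspace approach mentioned in the review.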