Accurate solution of structured least squares problems via rank-revealing decompositions (Q2866227)
From MaRDI portal
scientific article; zbMATH DE number 6238078
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Accurate solution of structured least squares problems via rank-revealing decompositions | scientific article; zbMATH DE number 6238078 | |
Statements
13 December 2013
accurate solutions
least squares problems
Moore-Penrose pseudoinverse
multiplicative perturbation theory
rank revealing decompositions
structured matrices
algorithm
spectral condition number
numerical experiment
Cauchy matrix
ill-conditioned
Accurate solution of structured least squares problems via rank-revealing decompositions (English)
A solution method for least squares problems \(\min_x\|b-Ax\|_2\), where \(A\) is a given rectangular matrix and \(b\) a given vector, is analyzed. The first stage of the algorithm consists in computing a rank-revealing decomposition \(A=XDY\), where \(X\) and \(Y\) are well-conditioned rectangular matrices and \(D\) is a nonsingular diagonal matrix. It is assumed that this decomposition can be computed accurately. The second stage of the algorithm consists of three steps: (1) compute the solution \(x_1\) of \(\min_x\|b-Xx\|_2\); (2) compute the solution \(x_2\) of \(Dx_2=x_1\); (3) compute the solution \(x_0\) of \(Yx_0=x_2\). The error analysis shows that the relative error between the exact solution \(x_0\) and the computed solution \(\widehat{x}_0\) is bounded as follows:
\[
\frac{\|\widehat{x}_0-x_0\|_2}{\|x_0\|_2} \leq {\mathtt u}\, f(m,n) \left( \kappa_2(Y)+\kappa_2(X)\, \frac{\|A^\dagger\|_2\|b\|_2}{\|x_0\|_2} \right),
\]
where \({\mathtt u}\) is the unit round-off of the computer, \(f(m,n)\) is a modestly growing function of \(m\) and \(n\), and \(A^\dagger\) is the Moore-Penrose pseudoinverse of \(A\). This bound significantly improves the classical bounds for algorithms solving this problem, since it does not depend directly on the spectral condition number of \(A\). The numerical experiments are performed with Cauchy matrices, which are ill-conditioned when their size is large. It is observed that the relative error is always of order \({\mathtt u}\) times a small constant.

The paper is well written and easy to understand. It may be suitable for students or researchers who want to implement or analyze this method.
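As a rough illustration of the two-stage scheme described above, the following Python/NumPy sketch solves the least squares problem from a given rank-revealing decomposition \(A=XDY\). The function names are hypothetical, and the factorization routine shown (column-pivoted QR) is only a generic stand-in for the structure-exploiting, high-accuracy RRD algorithms that the paper assumes for classes such as Cauchy matrices.

```python
import numpy as np
from scipy.linalg import qr


def lstsq_via_rrd(X, D, Y, b):
    """Solve min_x ||b - A x||_2 given a rank-revealing decomposition A = X @ D @ Y.

    Mirrors the three steps described in the review:
      (1) x1 = argmin ||b - X x1||_2   (X well conditioned)
      (2) solve D x2 = x1              (D diagonal, nonsingular)
      (3) solve Y x0 = x2              (Y well conditioned)
    """
    x1, *_ = np.linalg.lstsq(X, b, rcond=None)    # step (1)
    x2 = x1 / np.diag(D)                          # step (2): elementwise division
    x0, *_ = np.linalg.lstsq(Y, x2, rcond=None)   # step (3): minimum-norm if Y is rectangular
    return x0


def rrd_via_pivoted_qr(A):
    """Generic RRD A = X D Y built from column-pivoted QR (illustrative stand-in only)."""
    Q, R, piv = qr(A, mode='economic', pivoting=True)   # A[:, piv] = Q @ R
    d = np.diag(R)
    P = np.eye(A.shape[1])[:, piv]                       # permutation matrix
    return Q, np.diag(d), (R / d[:, None]) @ P.T         # X = Q, D = diag(d), Y = D^{-1} R P^T


# Small demo with an ill-conditioned Cauchy matrix C_ij = 1 / (s_i + t_j),
# the kind of structured matrix used in the reported experiments.
m, n = 20, 10
s = np.arange(1, m + 1, dtype=float)
t = np.arange(1, n + 1, dtype=float) + 0.5
A = 1.0 / (s[:, None] + t[None, :])
b = np.random.default_rng(0).standard_normal(m)

X, D, Y = rrd_via_pivoted_qr(A)
x0 = lstsq_via_rrd(X, D, Y, b)
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x0 - x_ref) / np.linalg.norm(x_ref))
```

Note that with the generic pivoted-QR factorization above one should not expect the small, condition-number-independent errors reported in the paper; that behaviour requires an RRD computed to high relative accuracy for the specific structured class at hand.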