Robustness conditions of the LMS algorithm with time-variant matrix step-size. (Q1575757)
scientific article; zbMATH DE number 1493522
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Robustness conditions of the LMS algorithm with time-variant matrix step-size. | scientific article; zbMATH DE number 1493522 | |
Statements
Robustness conditions of the LMS algorithm with time-variant matrix step-size. (English)
21 August 2000
Gradient-type algorithms commonly employ a scalar step-size, i.e., each entry of the regression vector is multiplied by the same value before the coefficients are updated. More flexibility is obtained, however, when the step-size is a matrix: it allows not only individual scaling of the entries of the regression vector but also rotations and decorrelations, depending on the choice of matrix. A well-known example of a fixed step-size matrix is the Newton-LMS algorithm, and the conditions under which a gradient-type algorithm with a fixed step-size matrix converges are well known. This article, in contrast, presents robustness and convergence conditions for a least-mean-square (LMS) algorithm with a time-variant matrix step-size. An example of a channel estimator used in a cellular phone shows that a suitable choice of step-size matrix leads to considerable improvement over the fixed step-size case.
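To make the role of the step-size matrix concrete, the following is a minimal sketch of the standard LMS recursion with a matrix step-size, w(k+1) = w(k) + M(k) u(k) e(k) with a priori error e(k) = d(k) - u(k)^T w(k); the scalar case corresponds to M(k) = mu*I, and Newton-LMS roughly to M(k) ≈ mu*R^{-1}. The function name `lms_matrix_step` and the diagonal, power-normalized choice of M(k) in the usage example are illustrative assumptions, not the scheme analyzed in the article.

```python
import numpy as np

def lms_matrix_step(d, U, step_matrices):
    """LMS adaptation with a (possibly time-variant) matrix step-size.

    Assumed update (standard LMS notation, not specific to the article):
        e(k) = d(k) - u(k)^T w(k)
        w(k+1) = w(k) + M(k) u(k) e(k)

    d             : desired signal, shape (N,)
    U             : regression vectors, shape (N, M), one row u(k) per step
    step_matrices : sequence of (M, M) step-size matrices M(k);
                    a scalar step-size mu corresponds to mu * np.eye(M)
    Returns the final coefficient vector and the a priori error sequence.
    """
    N, M = U.shape
    w = np.zeros(M)
    errors = np.empty(N)
    for k in range(N):
        u = U[k]
        e = d[k] - u @ w                      # a priori estimation error
        w = w + step_matrices[k] @ u * e      # matrix step-size update
        errors[k] = e
    return w, errors


# Illustrative comparison on a synthetic 4-tap channel: a fixed scalar
# step-size versus a simple time-variant diagonal step-size (entry-wise
# power normalization -- an assumption chosen for demonstration only).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M_taps, N = 4, 2000
    h_true = rng.standard_normal(M_taps)                  # unknown channel
    x = rng.standard_normal(N + M_taps)                   # input sequence
    U = np.array([x[k:k + M_taps][::-1] for k in range(N)])
    d = U @ h_true + 0.01 * rng.standard_normal(N)        # noisy output

    fixed_steps = [0.05 * np.eye(M_taps)] * N
    power = np.ones(M_taps)
    variant_steps = []
    for u in U:
        power = 0.99 * power + 0.01 * u**2                # running per-entry power
        variant_steps.append(np.diag(0.05 / (power + 1e-6)))

    w_fixed, _ = lms_matrix_step(d, U, fixed_steps)
    w_var, _ = lms_matrix_step(d, U, variant_steps)
    print("fixed step-size,   |w - h| =", np.linalg.norm(w_fixed - h_true))
    print("variant step-size, |w - h| =", np.linalg.norm(w_var - h_true))
```

Passing M(k) = mu * np.eye(M) for every k recovers the ordinary scalar-step LMS, so the same routine covers both the fixed and the time-variant case.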
Time-variant step-size matrix
Least-mean-squares algorithm
Channel estimation
Cellular phone