A gradient search interpretation of the super-exponential algorithm (Q2706013)
From MaRDI portal
scientific article
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | A gradient search interpretation of the super-exponential algorithm | scientific article | |
Statements
A gradient search interpretation of the super-exponential algorithm (English)
19 March 2001
Hadamard power
stationary points
gradient method
super-exponential algorithm
blind channel equalization
Hadamard exponentiation
gradient search
This correspondence reviews the super-exponential algorithm proposed by \textit{O. S. Shalvi} and \textit{E. Weinstein} [ibid. 39, 504-519 (1993; Zbl 0782.93090)] for blind channel equalization. The principle of this algorithm -- Hadamard exponentiation, projection onto the set of attainable combined channel-equalizer impulse responses, followed by a normalization -- is shown to coincide with a gradient search for an extremum of a cost function. The cost function belongs to the family of functions given as the ratio of the standard \(\ell_{2p}\) and \(\ell_2\) sequence norms, where \(p> 1\). This family is very relevant in blind channel equalization, tracing back to \textit{D. Donoho's} work on minimum entropy deconvolution [Applied time series analysis II, Proc. Symp., Tulsa/USA 1980, 565-608 (1981; Zbl 0481.62075)] and also underlying the Godard (or constant modulus) and the earlier Shalvi-Weinstein algorithms [\textit{D. N. Godard}, Self-recovering equalization and carrier tracking in two-dimensional data communication systems, IEEE Trans. Commun. COM-28, 1867-1875 (1980) and \textit{O. S. Shalvi} and \textit{E. Weinstein}, IEEE Trans. Inf. Theory 36, 312-321 (1990; Zbl 0704.94001)]. Using this gradient search interpretation, which is more tractable for analytical study, we give a simple proof of convergence for the super-exponential algorithm. Finally, we show that the gradient step-size choice giving rise to the super-exponential algorithm is optimal.
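The iteration described in the review (Hadamard exponentiation followed by normalization) can be sketched numerically. The following is a minimal NumPy illustration of the idealized case only -- real-valued taps, \(p=2\), and no projection onto the attainable set, i.e. a perfectly invertible channel; the function names are the sketch's own, not the paper's notation. It shows the property the review states: the iterates ascend the ratio of the \(\ell_{2p}\) and \(\ell_2\) norms, which is maximized by a single-spike (equalized) combined response.

```python
import numpy as np

def se_step(s, p=2):
    """One idealized super-exponential iteration (no channel projection):
    Hadamard power s_i -> |s_i|^(2p-2) * s_i, then l2 normalization."""
    t = np.abs(s) ** (2 * p - 2) * s
    return t / np.linalg.norm(t)

def cost(s, p=2):
    """Ratio of the l_{2p} and l_2 norms; equals 1 exactly for a
    one-tap (ideally equalized) combined impulse response."""
    return np.linalg.norm(s, 2 * p) / np.linalg.norm(s, 2)

rng = np.random.default_rng(0)
s = rng.standard_normal(8)          # toy combined channel-equalizer response
s /= np.linalg.norm(s)
costs = [cost(s)]
for _ in range(10):
    s = se_step(s)
    costs.append(cost(s))
# costs is nondecreasing and approaches 1: the iterates concentrate
# on the largest-magnitude tap, consistent with the gradient-ascent view
```

For \(p=2\) the Hadamard power reduces to elementwise cubing, so the ratio between any two tap magnitudes is raised to the power 3 at every step, which is the "super-exponential" convergence rate the algorithm is named for.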