Global convergence of a PCA learning algorithm with a constant learning rate
Publication: 2469892
DOI: 10.1016/j.camwa.2006.08.028
zbMath: 1138.62330
OpenAlex: W2081700245
MaRDI QID: Q2469892
Publication date: 11 February 2008
Published in: Computers & Mathematics with Applications
Full work available at URL: https://doi.org/10.1016/j.camwa.2006.08.028
Keywords: global convergence; neural networks; principal component analysis; deterministic discrete time system; constant learning rate
Mathematics Subject Classification:
- Factor analysis and principal components; correspondence analysis (62H25)
- Limit theorems in probability theory (60F99)
- Neural nets and related approaches to inference from stochastic processes (62M45)
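The keywords describe an Oja-type PCA learning rule studied as a deterministic discrete-time system with a constant learning rate. As an illustrative sketch only (the paper's exact algorithm and convergence conditions are not reproduced here; the covariance matrix, step size, and iteration count below are assumptions for the demo), a single-unit Oja iteration with a fixed step size can be written as:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative symmetric positive semi-definite "covariance" matrix.
A = rng.standard_normal((5, 5))
C = A @ A.T

eta = 0.01  # constant learning rate (assumed small enough for stability)
w = rng.standard_normal(5)
w /= np.linalg.norm(w)  # unit-norm initial weight vector

# Deterministic discrete-time Oja iteration with a constant step size:
#   w <- w + eta * (C w - (w^T C w) w)
for _ in range(5000):
    Cw = C @ w
    w = w + eta * (Cw - (w @ Cw) * w)

# Compare against the principal eigenvector from an exact eigendecomposition.
vals, vecs = np.linalg.eigh(C)
v1 = vecs[:, -1]
alignment = abs(w @ v1) / np.linalg.norm(w)
print(alignment)  # close to 1 when the iterate aligns with the top eigenvector
```

With a small constant step size the iterate stays bounded and its direction converges to the dominant eigenvector of C; the cited paper's contribution is a global convergence analysis of this kind of fixed-rate dynamics, rather than the stochastic-approximation setting with a decaying rate.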
Cites Work
- Adaptive algorithms for first principal eigenvector computation
- On stochastic approximation of the eigenvectors and eigenvalues of the expectation of a random matrix
- A simplified neuron model as a principal component analyzer
- Generalized neural networks for spectral analysis: dynamics and Liapunov functions
- Analysis of recursive stochastic algorithms