Global convergence of Oja's PCA learning algorithm with a non-zero-approaching adaptive learning rate
From MaRDI portal
DOI: 10.1016/j.tcs.2006.07.012
zbMath: 1178.68439
OpenAlex: W2009787688
MaRDI QID: Q857392
Jian Cheng Lv, Zhang Yi, Kok Kiong Tan
Publication date: 14 December 2006
Published in: Theoretical Computer Science
Full work available at URL: https://doi.org/10.1016/j.tcs.2006.07.012
Keywords: global convergence; principal component analysis; deterministic discrete time system; Oja's PCA learning algorithm
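For context, the single-unit Oja PCA rule that this line of work analyzes can be sketched as below. This is a minimal illustration with a fixed step size on synthetic data; the paper's specific non-zero-approaching adaptive learning rate, and its deterministic discrete-time convergence analysis, are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic zero-mean data whose dominant principal direction is the first axis.
X = rng.normal(size=(2000, 3)) * np.array([3.0, 1.0, 0.5])

# Random unit-norm initial weight vector.
w = rng.normal(size=3)
w /= np.linalg.norm(w)

eta = 0.01  # illustrative constant step; the paper studies an adaptive rate instead
for x in X:
    y = w @ x
    # Oja's single-unit learning rule: Hebbian term minus a normalizing decay.
    w = w + eta * y * (x - y * w)

# After training, w approximates the top eigenvector of the data covariance
# (up to sign), i.e. it should be close to +/- [1, 0, 0] here.
print(np.abs(w))
```

The self-normalizing term `-eta * y**2 * w` keeps the weight vector near unit norm without an explicit renormalization step, which is what makes the rule's global convergence behavior nontrivial to establish.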
Related Items (6)
- Convergence analysis of Oja's iteration for solving online PCA with nonzero-mean samples
- Robust classifier using distance-based representation with square weights
- Adaptive multiple minor directions extraction in parallel using a PCA neural network
- Another neural network based approach for computing eigenvalues and eigenvectors of real skew-symmetric matrices
- Online Principal Component Analysis in High Dimension: Which Algorithm to Choose?
- Stochastic Gauss-Newton algorithms for online PCA
Cites Work
- Adaptive algorithms for first principal eigenvector computation
- On stochastic approximation of the eigenvectors and eigenvalues of the expectation of a random matrix
- A simplified neuron model as a principal component analyzer
- Generalized neural networks for spectral analysis: dynamics and Liapunov functions
- Multilayer dynamic neural networks for non-linear system on-line identification