Convergence analysis of Oja's iteration for solving online PCA with nonzero-mean samples
DOI: 10.1007/s11425-018-9554-4 · zbMath: 1465.62116 · OpenAlex: W3017369631 · MaRDI QID: Q829393
Publication date: 6 May 2021
Published in: Science China. Mathematics
Full work available at URL: https://doi.org/10.1007/s11425-018-9554-4
MSC classifications:
- Computational methods for problems pertaining to statistics (62-08)
- Factor analysis and principal components; correspondence analysis (62H25)
- Numerical computation of eigenvalues and eigenvectors of matrices (65F15)
- Nonconvex programming, global optimization (90C26)
- Stochastic approximation (62L20)
- Statistical aspects of big data and data science (62R07)
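For context, the publication analyzes Oja's iteration for online PCA. A minimal sketch of the classic Oja update (not the paper's exact nonzero-mean variant, and with hypothetical parameter choices) is: on each incoming sample x, take a rank-one stochastic gradient step w ← w + η x(xᵀw) and renormalize w to the unit sphere, so that w tracks the top eigenvector of E[xxᵀ].

```python
import numpy as np

def oja_top_eigvec(X, eta=0.01, seed=0):
    """Classic Oja iteration: estimate the leading eigenvector of E[x x^T]
    from a stream of samples X of shape (n_samples, dim).
    A sketch with an illustrative constant step size eta."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)          # start from a random unit vector
    for x in X:
        w = w + eta * x * (x @ w)   # rank-one stochastic gradient step
        w /= np.linalg.norm(w)      # project back onto the unit sphere
    return w

# Demo: anisotropic Gaussian stream whose dominant direction is e1.
rng = np.random.default_rng(1)
X = rng.standard_normal((5000, 5)) * np.array([3.0, 1.0, 1.0, 1.0, 1.0])
w = oja_top_eigvec(X)
print(abs(w[0]))  # close to 1: w aligns with the top eigenvector e1
```

The samples here are zero-mean by construction; handling a nonzero sample mean (e.g. via online centering) is precisely the complication the paper's analysis addresses.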
Cites Work
- Low-rank incremental methods for computing dominant singular subspaces
- Near-optimal stochastic approximation for online principal component estimation
- Global convergence of Oja's PCA learning algorithm with a non-zero-approaching adaptive learning rate
- On stochastic approximation of the eigenvectors and eigenvalues of the expectation of a random matrix
- A simplified neuron model as a principal component analyzer
- Method of stochastic approximation in the determination of the largest eigenvalue of the mathematical expectation of random matrices
- An Introduction to Matrix Concentration Inequalities
- Online Principal Component Analysis in High Dimension: Which Algorithm to Choose?