Tensor Decompositions for Learning Latent Variable Models (A Survey for ALT)
Publication: 2835615
DOI: 10.1007/978-3-319-24486-0_2
zbMath: 1471.62367
OpenAlex: W2274757334
MaRDI QID: Q2835615
Anima Anandkumar, Rong Ge, Daniel Hsu, Sham M. Kakade, Matus Telgarsky
Publication date: 30 November 2016
Published in: Lecture Notes in Computer Science
Full work available at URL: https://doi.org/10.1007/978-3-319-24486-0_2
Mathematics Subject Classification:
- Asymptotic properties of parametric estimators (62F12)
- Factor analysis and principal components; correspondence analysis (62H25)
- Estimation in multivariate analysis (62H12)
- Multilinear algebra, tensor calculus (15A69)
Related Items
- Convenient tail bounds for sums of random tensors
- General tail bounds for random tensors summation: majorization approach
- Eigenvectors of Orthogonally Decomposable Functions
Cites Work
- A spectral algorithm for learning hidden Markov models
- On exchangeable random variables and the statistics of large graphs and hypergraphs
- Asymptotic methods in statistical decision theory
- Full reconstruction of Markov models on evolutionary trees: identifiability and consistency.
- Independent component analysis, a new concept?
- Adaptive blind separation of independent sources: A deflation approach
- A spectral algorithm for latent Dirichlet allocation
- Eigenvalues of a real supersymmetric tensor
- Rank-One Approximation to High Order Tensors
- Tensor decompositions for learning latent variable models
- Learning mixtures of spherical Gaussians
- Shifted Power Method for Computing Tensor Eigenpairs
- Numerical Optimization
- On the Best Rank-1 and Rank-(R1, R2, ..., RN) Approximation of Higher-Order Tensors
- Symmetric Tensors and Symmetric Tensor Rank
- Perturbation bounds in connection with singular value decomposition
- Learning nonsingular phylogenies and hidden Markov models
- Subtracting a best rank-1 approximation may increase tensor rank