Comparison of Accuracy and Scalability of Gauss--Newton and Alternating Least Squares for CANDECOMP/PARAFAC Decomposition
DOI: 10.1137/20M1344561 · Wikidata: Q114074193 · Scholia: Q114074193 · MaRDI QID: Q5009907
Navjot Singh, Hongru Yang, Linjian Ma, Edgar Solomonik
Publication date: 9 August 2021
Published in: SIAM Journal on Scientific Computing
Full work available at URL: https://arxiv.org/abs/1910.12331
Keywords: Gauss--Newton method; tensor decomposition; alternating least squares; CP decomposition; Cyclops Tensor Framework
MSC classifications: Numerical optimization and variational techniques (65K10); Parallel numerical computation (65Y05); Approximation algorithms (68W25); Complexity and performance of numerical algorithms (65Y20); Numerical algorithms for computer arithmetic, etc. (65Y04)
Related Items (1)
Uses Software
Cites Work
- Tensor Decompositions and Applications
- Numerical CP decomposition of some difficult tensors
- How to multiply matrices faster
- A comparison of algorithms for fitting the PARAFAC model
- An enhanced line search scheme for complex-valued tensor decompositions. Application in DS-CDMA
- CANDELINC: A general approach to multidimensional analysis of many-way arrays with linear constraints on parameters
- Three-way arrays: rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics
- Computing dense tensor decompositions with optimal dimension trees
- Some convergence results on the regularized alternating least-squares method for tensor decomposition
- Gaussian elimination is not optimal
- Minimization of functions having Lipschitz continuous first partial derivatives
- Optimization-Based Algorithms for Tensor Decompositions: Canonical Polyadic Decomposition, Decomposition in Rank-$(L_r,L_r,1)$ Terms, and a New Generalization
- High-Dimensional Covariance Decomposition into Sparse Markov and Independence Models
- Tensor decompositions for learning latent variable models
- Exploiting Symmetry in Tensors for High Performance: Multiplication with Symmetric Tensors
- The bilinear complexity and practical algorithms for matrix multiplication
- Fast Parallel Matrix Inversion Algorithms
- ScaLAPACK Users' Guide
- Parallel Candecomp/Parafac Decomposition of Sparse Tensors Using Dimension Trees
- Tensor Decomposition for Signal Processing and Machine Learning
- A Practical Randomized CP Tensor Decomposition
- Nesterov acceleration of alternating least squares for canonical tensor decomposition: Momentum step size selection and restart mechanisms
- Communication-optimal Parallel and Sequential Cholesky Decomposition
- Computing the Gradient in Optimization Algorithms for the CP Decomposition in Constant Memory through Tensor Blocking
- Low Complexity Damped Gauss--Newton Algorithms for CANDECOMP/PARAFAC
- Enhanced Line Search: A Novel Method to Accelerate PARAFAC