Performance of the Low-Rank TT-SVD for Large Dense Tensors on Modern MultiCore CPUs
DOI: 10.1137/21M1395545 · zbMath: 1501.65162 · arXiv: 2102.00104 · OpenAlex: W4285387162 · Wikidata: Q114074097 · Scholia: Q114074097 · MaRDI QID: Q5095475
Jonas Thies, Melven Röhrig-Zöllner, Achim Basermann
Publication date: 9 August 2022
Published in: SIAM Journal on Scientific Computing
Full work available at URL: https://arxiv.org/abs/2102.00104
tensor decomposition · high-dimensional problems · performance modeling · TT-format · higher-order SVD · high-performance computing
Parallel numerical computation (65Y05) · Complexity and performance of numerical algorithms (65Y20) · Numerical linear algebra (65F99)
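The record's subject, the TT-SVD, compresses a \(d\)-dimensional tensor into a chain of order-3 cores by sweeping truncated SVDs over successive matrix unfoldings. A minimal NumPy sketch of the standard algorithm is shown below; the function names `tt_svd` and `tt_to_full` are illustrative choices, and this plain version is not the blocked, cache-optimized implementation whose performance the paper studies.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Decompose a dense tensor into tensor-train (TT) cores by
    successive truncated SVDs of its unfoldings."""
    shape = tensor.shape
    d = len(shape)
    cores = []
    r = 1  # current left TT-rank
    C = tensor
    for k in range(d - 1):
        # Unfold: rows combine the previous rank with mode k.
        C = C.reshape(r * shape[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        # Truncate singular values below the relative tolerance eps.
        rk = max(1, int(np.sum(s > eps * s[0])))
        cores.append(U[:, :rk].reshape(r, shape[k], rk))
        # Carry the remainder diag(s) @ Vt to the next sweep step.
        C = s[:rk, None] * Vt[:rk]
        r = rk
    cores.append(C.reshape(r, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract a list of TT cores back into the full tensor."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.reshape(tuple(c.shape[1] for c in cores))
```

Without truncation (small `eps`) the contraction of the cores reproduces the input tensor up to floating-point error; with a larger `eps` the TT-ranks, and hence the storage, shrink at the cost of a controlled approximation error.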
Uses Software
Cites Work
- Tensor-Train Decomposition
- \(O(d \log N)\)-quantics approximation of \(N\)-\(d\) tensors in high-dimensional numerical modeling
- A new tensor decomposition
- An introduction to hierarchical (\(\mathcal H\)-) rank and TT-rank of tensors with examples
- Spectral Tensor-Train Decomposition
- Two-Level QTT-Tucker Format for Optimized Tensor Calculus
- A literature survey of low-rank tensor approximation techniques
- Communication-optimal Parallel and Sequential QR and LU Factorizations
- Model Reduction With MapReduce-enabled Tall and Skinny Singular Value Decomposition
- Breaking the Curse of Dimensionality, Or How to Use SVD in Many Dimensions
- Communication Avoiding Rank Revealing QR Factorization with Column Pivoting
- Increasing the Performance of the Jacobi--Davidson Method by Blocking
- LAPACK Users' Guide
- Communication lower bounds and optimal algorithms for numerical linear algebra
- Linnea
- Parallel Algorithms for Tensor Train Arithmetic
- Stochastic Gradients for Large-Scale Tensor Decomposition
- Algorithm 844
- Unitary Triangularization of a Nonsymmetric Matrix
- Q-less QR decomposition in inner product spaces