Randomized Algorithms for Low-Rank Tensor Decompositions in the Tucker Format
DOI: 10.1137/19M1261043
zbMATH Open: 1484.65091
arXiv: 1905.07311
OpenAlex: W3008842210
MaRDI QID: Q5027024
Authors: Rachel Minster, Arvind K. Saibaba, Misha E. Kilmer
Publication date: 3 February 2022
Published in: SIAM Journal on Mathematics of Data Science
Abstract: Many applications in data science and scientific computing involve large-scale datasets that are expensive to store and compute with but can be efficiently compressed and stored in an appropriate tensor format. In recent years, randomized matrix methods have been used to compute low-rank matrix decompositions efficiently and accurately. Motivated by this success, we focus on developing randomized algorithms for tensor decompositions in the Tucker representation. Specifically, we present randomized versions of two well-known compression algorithms, namely, the higher-order SVD (HOSVD) and the sequentially truncated HOSVD (STHOSVD). We present a detailed probabilistic analysis of the error of the randomized tensor algorithms. We also develop variants of these algorithms that tackle specific challenges posed by large-scale datasets. The first variant adaptively finds a low-rank representation satisfying a given tolerance and is beneficial when the target rank is not known in advance. The second variant preserves the structure of the original tensor and is beneficial for large sparse tensors that are difficult to load into memory. We consider several different datasets for our numerical experiments: synthetic test tensors and realistic applications such as the compression of facial image samples in the Olivetti database and word counts in the Enron email dataset.
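For a concrete picture of the approach described in the abstract, the following is a minimal NumPy sketch of a randomized HOSVD: each Tucker factor is estimated by a randomized SVD of the corresponding mode unfolding, and the core is formed by contracting the tensor with the transposed factors. This is not the authors' algorithm as published; the function names, the Gaussian sketching matrix, and the oversampling parameter are illustrative assumptions.

import numpy as np

def unfold(X, mode):
    # Mode-n unfolding: move `mode` to the front, flatten the remaining modes.
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def randomized_factor(A, rank, oversample=10, rng=None):
    # Fixed-rank randomized SVD (Halko-Martinsson-Tropp style):
    # sketch the range of A, then recover the top left singular vectors.
    rng = np.random.default_rng() if rng is None else rng
    Omega = rng.standard_normal((A.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(A @ Omega)            # basis for the sketched range
    Ub, _, _ = np.linalg.svd(Q.T @ A, full_matrices=False)
    return Q @ Ub[:, :rank]                   # estimated top-`rank` left singular vectors

def randomized_hosvd(X, ranks, rng=None):
    # One factor per mode, each computed from an unfolding of the *original*
    # tensor; STHOSVD would instead unfold the partially compressed tensor
    # at each step, shrinking the work as it goes.
    factors = [randomized_factor(unfold(X, n), r, rng=rng)
               for n, r in enumerate(ranks)]
    G = X
    for n, U in enumerate(factors):
        # Core update: G <- G x_n U^T (mode-n product with the transposed factor).
        G = np.moveaxis(np.tensordot(U.T, G, axes=(1, n)), 0, n)
    return G, factors

# Toy usage: compress a noisy rank-1 tensor and check the relative error.
rng = np.random.default_rng(0)
X = np.einsum('i,j,k->ijk', *(rng.standard_normal(40) for _ in range(3)))
X = X + 1e-3 * rng.standard_normal(X.shape)
G, Us = randomized_hosvd(X, ranks=(5, 5, 5), rng=rng)
Xhat = G
for n, U in enumerate(Us):
    Xhat = np.moveaxis(np.tensordot(U, Xhat, axes=(1, n)), 0, n)
print(np.linalg.norm(Xhat - X) / np.linalg.norm(X))

The paper's adaptive variant would instead grow the sketch until a given error tolerance is met rather than fixing the ranks in advance, and its structure-preserving variant avoids densifying large sparse tensors; neither refinement is shown in this sketch.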
Full work available at URL: https://arxiv.org/abs/1905.07311