Low-rank approximation and completion of positive tensors (Q2827064)
From MaRDI portal
scientific article; zbMATH DE number 6637583
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Low-rank approximation and completion of positive tensors | scientific article; zbMATH DE number 6637583 | |
Statements
12 October 2016
tensor completion
tensor approximation
categorical regression
Low-rank approximation and completion of positive tensors (English)
This paper defines a hierarchical decomposition of positive tensors via convex optimization. The decomposition is shown to exist, to be numerically well-posed, and to coincide with the usual CP tensor decomposition [\textit{T. G. Kolda} and \textit{B. W. Bader}, SIAM Rev. 51, No. 3, 455--500 (2009; Zbl 1173.65029)] in specific cases. A randomized algorithm is given that computes the hierarchical decomposition, and subsequently the best rank-1 approximation of a positive tensor, in polynomial time; its running time depends on the number of degrees of freedom of the tensor rather than on the potentially exponentially larger number of tensor entries.

This algorithm is then applied to the problem of tensor completion [\textit{S. Gandy} et al., Inverse Probl. 27, No. 2, Article ID 025010, 19 p. (2011; Zbl 1211.15036)], in which noisy tensor entries are observed and used to estimate the missing entries. Standard approaches to this problem require a number of measurements that is exponential in the tensor order; a new algorithm is provided that requires only a polynomial number of measurements in specific cases. In the case of a rank-1 tensor, the number of measurements needed, \(O((rp^2)^{1+\zeta})\) for any \(\zeta > 0\), is a quadratic factor away from the information-theoretic lower bound of \(O(rp)\). Sparsity in the tensor leads to further improvements of the algorithm.

The paper concludes with numerical examples, using synthetic data as well as data from a bioengineered metabolic network, which show that the new approach outperforms other tensor completion algorithms.
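The paper's convex formulation is not reproduced in this review, but the structural fact it builds on (a strictly positive rank-1 tensor becomes an additive model after a log transform) can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the paper's algorithm: the function name and the mean-based least-squares fit are choices made here for a fully observed tensor.

```python
import numpy as np

def rank1_positive_approx(T):
    """Approximate a strictly positive tensor by a rank-1 positive tensor.

    Works in log space: the log of a positive rank-1 tensor
    a (x) b (x) c is the additive model log a_i + log b_j + log c_k,
    which is fit here by mode-wise means (the least-squares solution
    when the tensor is fully observed).
    """
    L = np.log(T)
    mu = L.mean()                          # global constant term
    log_approx = np.full(T.shape, mu)
    for mode in range(T.ndim):
        other_axes = tuple(ax for ax in range(T.ndim) if ax != mode)
        effect = L.mean(axis=other_axes) - mu  # main effect of this mode
        shape = [1] * T.ndim
        shape[mode] = T.shape[mode]
        log_approx = log_approx + effect.reshape(shape)
    return np.exp(log_approx)

# On an exactly rank-1 positive tensor the fit recovers it exactly.
a, b, c = np.array([1.0, 2.0, 3.0]), np.array([0.5, 1.5]), np.array([2.0, 4.0])
T = np.einsum('i,j,k->ijk', a, b, c)
That = rank1_positive_approx(T)
print(np.allclose(T, That))  # True: additive log model matches exactly
```

For noisy or partially observed tensors (the completion setting discussed above), the same additive log model would instead be fit by least squares over the observed entries only; the closed-form means used here are valid only for the fully observed case.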