Convergence bounds for nonlinear least squares and applications to tensor recovery
arXiv: 2108.05237
MaRDI QID: Q6375022
Publication date: 11 August 2021
Abstract: We consider the problem of approximating a function in general nonlinear subsets of $L^2$ when only a weighted Monte Carlo estimate of the $L^2$-norm can be computed. Of particular interest in this setting is the concept of sample complexity, the number of samples necessary to recover the best approximation. Bounds for this quantity have been derived in a previous work; they depend primarily on the model class and are not influenced positively by the regularity of the sought function. This result, however, is only a worst-case bound and cannot explain the remarkable performance of iterative hard thresholding algorithms observed in practice. We reexamine the results of the previous paper and derive a new bound that is able to utilize the regularity of the sought function. A critical analysis of our results allows us to derive a sample-efficient algorithm for the model set of low-rank tensors. The viability of this algorithm is demonstrated by recovering quantities of interest for a classical high-dimensional random partial differential equation.
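To illustrate the sampling setting described in the abstract, the following is a minimal sketch of a weighted Monte Carlo estimate of the $L^2$-norm. It assumes a uniform sampling density on $[0, 1]$ (hence unit weights); the function names and parameters are illustrative and are not taken from the paper or its companion code.

```python
import numpy as np

def empirical_l2_norm(f, samples, weights):
    """Weighted Monte Carlo estimate of ||f||_{L^2}:
    sqrt( (1/n) * sum_i w_i * f(x_i)^2 )."""
    values = f(samples)
    return np.sqrt(np.mean(weights * values ** 2))

# Hypothetical demo: estimate the L^2(0, 1) norm of f(x) = x.
# Sampling from the uniform density itself gives unit weights;
# importance sampling from another density would change them.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=10_000)
w = np.ones_like(x)
print(empirical_l2_norm(lambda t: t, x, w))  # approx. 1/sqrt(3) = 0.577
```

Replacing the true norm by such an empirical estimate is what makes the number of samples, and hence the sample complexity, the central quantity in the abstract's analysis.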
Has companion code repository: https://github.com/ptrunschke/l1_salsa_test
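The abstract attributes strong practical performance to iterative hard thresholding. Below is a hedged sketch of such a scheme for the simplest low-rank model set, matrices of bounded rank recovered from partially observed entries. This is a generic textbook-style variant, not the paper's tensor algorithm or the code in the linked repository; all names and parameters are hypothetical.

```python
import numpy as np

def hard_threshold(X, rank):
    """Project X onto matrices of rank at most `rank` via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

def iht_matrix_completion(Y, mask, rank, steps=200, step_size=1.0):
    """Iterative hard thresholding for low-rank matrix completion.

    Y    : observed entries (zeros elsewhere)
    mask : boolean array, True where an entry was observed
    """
    X = np.zeros_like(Y)
    for _ in range(steps):
        residual = mask * (X - Y)  # gradient of 0.5 * ||P_mask(X) - Y||_F^2
        X = hard_threshold(X - step_size * residual, rank)
    return X

# Hypothetical demo: recover a random rank-2 matrix from ~60% of its entries.
rng = np.random.default_rng(0)
truth = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))
mask = rng.random((20, 20)) < 0.6
X_hat = iht_matrix_completion(mask * truth, mask, rank=2)
print(np.linalg.norm(X_hat - truth) / np.linalg.norm(truth))  # small relative error
```

For the low-rank tensor model sets considered in the paper, the truncated SVD projection would be replaced by a projection onto the corresponding low-rank tensor format.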
Mathematics Subject Classification: Analysis of algorithms and problem complexity (68Q25); General nonlinear regression (62J02); Complexity and performance of numerical algorithms (65Y20); Multilinear algebra, tensor calculus (15A69); Approximation by other special function classes (41A30)