Randomized approximation of summable sequences -- adaptive and non-adaptive (Q6632941)

From MaRDI portal

scientific article; zbMATH DE number 7938832

    Statements

    5 November 2024
    Let \(S\in L(F,G)\) be a bounded linear operator between Banach spaces \(F\) and \(G\). It is approximated by algorithms \(A_n:=\phi\circ N\), where \(N:F\to\mathbb{R}^n\) extracts \(n\) samples of information from \(f\in F\) and \(\phi:\mathbb{R}^n\to G\) uses them to reconstruct an approximation to \(S(f)\). The deterministic error is measured as \(e^{\mathrm{det}}(n,S)=\inf_{A_n}\sup_{\|f\|_F\le1} \|A_n(f)-S(f)\|_G\), and the minimal \(n=n(\varepsilon)\) needed to reach an error not exceeding \(\varepsilon\) is called the complexity.

    It is further assumed that there is a class \((A_n^\omega)_{\omega\in\Omega}\) of randomized algorithms. Variables and mappings become random, and in the error \(e^{\mathrm{ran}}(n,S)\) the expected value is taken. If \(A_n^\omega\) may depend on previously computed samples, it is called adaptive (hence nonlinear); otherwise it is non-adaptive.

    In this paper \(F=\ell_p^m\) and \(G=\ell_q^m\) with \(m>n\) and \(1\le p,q\le\infty\), \(S\) is the embedding operator, and \(A_n^\omega\) is primarily assumed to be non-adaptive. The cases of interest are \(m\to\infty\) or \(n/m\to 0\). The authors prove lower error bounds for a linear (hence non-adaptive) measurement matrix \(N\in\mathbb{R}^{n\times m}\); this yields a term \(\sqrt{\log m}\) in the lower bound for \(n(\varepsilon)\). An important consequence is that a non-compact operator \(S\) between Banach spaces cannot be approximated by non-adaptive Monte Carlo methods.

    An upper bound for adaptive methods is also computed, containing only a factor \(\sqrt{\log\log m}\). This implies that for \((p,q)=(1,2)\), adaptive algorithms are better than non-adaptive ones by a factor of at least \(\sqrt{n/\log n}\).

    Several results from the literature appear as special cases of the results shown here.
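    As a toy illustration of randomized, non-adaptive information of the kind discussed above, the sketch below estimates the mean of a long finite sequence from \(n\) coordinates drawn i.i.d. uniformly, independently of the values of the sequence. The expected error decays like \(n^{-1/2}\), the classical Monte Carlo rate. The setup (uniform sampling, the particular sequence) is purely illustrative and is not the algorithm analysed in the paper.

```python
import numpy as np

# Toy Monte Carlo summation (illustrative only, not the paper's method):
# approximate the mean of a long sequence from n uniformly drawn entries.
rng = np.random.default_rng(42)
m = 1_000_000
x = rng.random(m)            # a long finite "sequence"
true_mean = x.mean()

def mc_mean(n):
    # Non-adaptive randomized information: n coordinates chosen i.i.d.
    # uniformly, independent of the values of x (no adaption to f).
    return x[rng.integers(0, m, size=n)].mean()

err_small = abs(mc_mean(100) - true_mean)      # expected error ~ 1/sqrt(100)
err_large = abs(mc_mean(10_000) - true_mean)   # expected error ~ 1/sqrt(10000)
print(err_small, err_large)
```

    An adaptive method, by contrast, would be allowed to choose each sampled coordinate based on the values already seen; the paper quantifies how large the resulting advantage can be.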
    Keywords: Monte Carlo; information-based complexity; error bounds; compact operator; adaptive sampling
