Best approximation and inverse results for neural network operators - MaRDI portal

Best approximation and inverse results for neural network operators (Q6595826)

scientific article; zbMATH DE number 7904180

    Statements

    Best approximation and inverse results for neural network operators (English)
    30 August 2024
    In this article, approximations by neural network operators are studied and their errors are estimated from above. The neural networks are of a classical type using sigmoidal functions from standard classes. The theorems are restricted to the one-dimensional setting.

    The error estimates are formulated in terms of moduli of smoothness, and some are shown to be optimal. The approximation rates are studied with respect to the uniform norm, for example, under the condition that the sigmoidal function defining the neuron, call it \(\sigma\), satisfies the decay bound \(\sigma(x)=O(|x|^{-\alpha})\) as \(x\to-\infty\) with an exponent \(\alpha\) greater than one. The error bounds in the Chebyshev norm are shown to be optimal in various subcases. Inverse theorems are also established for some classes of Lipschitz functions.
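    For orientation, one-dimensional neural network operators with sigmoidal activation are typically built as in the following sketch of the classical construction (an assumption for illustration; the exact normalization and index range in the paper under review may differ):

```latex
% Density function built from the sigmoidal activation \sigma
% (standard construction; not necessarily the paper's exact variant):
\phi_\sigma(x) := \tfrac{1}{2}\bigl(\sigma(x+1) - \sigma(x-1)\bigr)

% Neural network operator applied to f \colon [a,b] \to \mathbb{R}:
F_n(f)(x) :=
  \frac{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}
          f\bigl(\tfrac{k}{n}\bigr)\,\phi_\sigma(nx-k)}
       {\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}
          \phi_\sigma(nx-k)}
```

    Roughly speaking, a decay condition such as \(\sigma(x)=O(|x|^{-\alpha})\) with \(\alpha>1\) localizes the kernel \(\phi_\sigma\) around \(0\), which is what allows uniform-norm error bounds in terms of moduli of smoothness.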
    neural network operators
    sigmoidal function
    modulus of continuity
    Lipschitz classes
    inverse theorem of approximation

    Identifiers