Limitations of the approximation capabilities of neural networks with one hidden layer

From MaRDI portal
Publication: 1923890

DOI: 10.1007/BF02124745 · zbMath: 0855.41026 · MaRDI QID: Q1923890

Xin Li, Charles K. Chui, Hrushikesh N. Mhaskar

Publication date: 13 October 1996

Published in: Advances in Computational Mathematics




Related Items (20)

Deep distributed convolutional neural networks: Universality
Theoretical issues in deep networks
On the approximation by single hidden layer feedforward neural networks with fixed weights
Theory of deep convolutional neural networks: downsampling
A deep network construction that adapts to intrinsic dimensionality beyond the domain
Approximating smooth and sparse functions by deep neural networks: optimal approximation rates and saturation
Function approximation by deep networks
Learning sparse and smooth functions by deep sigmoid nets
Approximation of nonlinear functionals using deep ReLU networks
Deep nonparametric regression on approximate manifolds: nonasymptotic error bounds with polynomial prefactors
Application of radial basis function and generalized regression neural networks in nonlinear utility function specification for travel mode choice modelling
Complexity of neural network approximation with limited information: A worst case approach
Limitations of shallow nets approximation
Universality of deep convolutional neural networks
Theory of deep convolutional neural networks. II: Spherical analysis
Deep vs. shallow networks: An approximation theory perspective
On simultaneous approximations by radial basis function neural networks
Approximative compactness of linear combinations of characteristic functions
Extension of localised approximation by neural networks
Constructive approximate interpolation by neural networks


This page was built for publication: Limitations of the approximation capabilities of neural networks with one hidden layer