Sharp Bounds on the Approximation Rates, Metric Entropy, and n-Widths of Shallow Neural Networks
Publication: 6489780
DOI: 10.1007/S10208-022-09595-3
MaRDI QID: Q6489780
Authors: Jonathan W. Siegel, Jin-Chao Xu
Publication date: 22 April 2024
Published in: Foundations of Computational Mathematics
MSC classifications:
- Approximation by arbitrary nonlinear expressions; widths and entropy (41A46)
- Neural nets and related approaches to inference from stochastic processes (62M45)
Related Items (2)
- Two-layer networks with the \(\text{ReLU}^k\) activation function: Barron spaces and derivative approximation
- An iterative method for the solution of Laplace-like equations in high and very high space dimensions
Cites Work
- Eigenvalues, embeddings and generalised trigonometric functions
- Inequalities of Bernstein-Jackson-type and the degree of compactness of operators in Banach spaces
- Universal Donsker classes and metric entropy
- Gelfand numbers of operators with values in a Hilbert space
- Entropy numbers, s-numbers, and eigenvalue problems
- A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training
- Uniform approximation by neural networks
- Improved upper bounds for approximation by zonotopes
- Information-theoretic determination of minimax rates of convergence
- Optimal nonlinear approximation
- Tight upper bounds for the discrepancy of half-spaces
- Random approximants and neural networks
- Gelfand numbers and widths
- Approximation rates for neural networks with general activation functions
- High-order approximation rates for shallow neural networks with cosine and \(\mathrm{ReLU}^k\) activation functions
- Gelfand numbers and metric entropy of convex hulls in Hilbert spaces
- Approximation and learning by greedy algorithms
- Entropy numbers of convex hulls in Banach spaces and applications
- The sizes of compact subsets of Hilbert space and continuity of Gaussian processes
- General Lagrange and Hermite interpolation in \(R^n\) with applications to finite element methods
- The best constants in the Khintchine inequality
- s-Numbers of operators in Banach spaces
- On Lattices Admitting Unique Lagrange Interpolations
- Universal approximation bounds for superpositions of a sigmoidal function
- Efficient agnostic learning of neural networks with bounded fan-in
- Metric Entropy of Convex Hulls in Hilbert Spaces
- Metric Entropy of Convex Hulls in Banach Spaces
- Bounds on rates of variable-basis and neural-network approximation
- Comparison of worst case errors in linear and neural network approximation
- Approximation by Combinations of ReLU and Squared ReLU Ridge Functions With \(\ell^1\) and \(\ell^0\) Controls
- On a Class of Finite Elements Generated by Lagrange Interpolation
- Finite Neuron Method and Convergence Analysis
- History of Banach Spaces and Linear Operators
- Breaking the Curse of Dimensionality with Convex Neural Networks
- Convex Analysis
- Estimation of Linear Functionals on Sobolev Spaces with Application to Fourier Transforms and Spline Interpolation
- Compressed sensing
- Geometric discrepancy. An illustrated guide