Uniform approximation rates and metric entropy of shallow neural networks
Publication: 2157931
DOI: 10.1007/s40687-022-00346-y
OpenAlex: W4286218552
Wikidata: Q114218935 (Scholia: Q114218935)
MaRDI QID: Q2157931
Jonathan W. Siegel, Limin Ma, Jin-Chao Xu
Publication date: 22 July 2022
Published in: Research in the Mathematical Sciences
Full work available at URL: https://doi.org/10.1007/s40687-022-00346-y
Related Items (1)
Cites Work
- Greedy function approximation: A gradient boosting machine
- On the rate of convergence of greedy algorithms
- The essential order of approximation for neural networks
- Entropy numbers, s-numbers, and eigenvalue problems
- A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training
- Uniform approximation by neural networks
- Improved upper bounds for approximation by zonotopes
- The Deep Ritz Method: a deep learning-based numerical algorithm for solving variational problems
- Approximation of functions of finite variation by superpositions of a sigmoidal function
- Some problems in the theory of ridge functions
- Tight upper bounds for the discrepancy of half-spaces
- Random approximants and neural networks
- Some remarks on greedy algorithms
- Representation formulas and pointwise properties for Barron functions
- High-order approximation rates for shallow neural networks with cosine and \(\mathrm{ReLU}^k\) activation functions
- Approximation and learning by greedy algorithms
- A combinatorial problem; stability and order for models and theories in infinitary languages
- On the density of families of sets
- Optimal stable nonlinear approximation
- On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities
- Greedy Approximation
- Lower bounds for the rate of convergence of greedy algorithms
- Greedy approximation
- Approximation by Ridge Functions and Neural Networks
- Universal approximation bounds for superpositions of a sigmoidal function
- Metric Entropy of Convex Hulls in Hilbert Spaces
- Metric Entropy of Convex Hulls in Banach Spaces
- Bounds on rates of variable-basis and neural-network approximation
- Comparison of worst case errors in linear and neural network approximation
- Approximation by Combinations of ReLU and Squared ReLU Ridge Functions with \(\ell^1\) and \(\ell^0\) Controls
- Sup-norm approximation bounds for networks through probabilistic methods
- Matching pursuits with time-frequency dictionaries
- What Kinds of Functions Do Deep Neural Networks Learn? Insights from Variational Spline Theory
- Optimal Convergence Rates for the Orthogonal Greedy Algorithm
- Deep Network Approximation for Smooth Functions
- Finite Neuron Method and Convergence Analysis
- Breaking the Curse of Dimensionality with Convex Neural Networks
- Constructive Approximation by Superposition of Sigmoidal Functions
- Geometric discrepancy. An illustrated guide
- Approximation by superpositions of a sigmoidal function