Approximation results for gradient flow trained shallow neural networks in \(1d\)
From MaRDI portal
Publication: Q6648717
DOI: 10.1007/s00365-024-09694-0 | MaRDI QID: Q6648717
Gerrit Welper, Russell Gentile
Publication date: 5 December 2024
Published in: Constructive Approximation
MSC classifications: Artificial neural networks and deep learning (68T07); Numerical optimization and variational techniques (65K10); Approximation by arbitrary nonlinear expressions; widths and entropy (41A46)
Cites Work
- Title not available
- Title not available
- Tail inequalities for sums of random matrices that depend on the intrinsic dimension
- Provable approximation properties for deep neural networks
- Approximation rates for neural networks with general activation functions
- Efficient approximation of solutions of parametric linear transport equations by ReLU DNNs
- Constructive deep ReLU neural network approximation
- A theoretical analysis of deep neural networks and parametric PDEs
- Nonlinear approximation and (deep) ReLU networks
- Approximation spaces of deep neural networks
- The Barron space and the flow-induced function spaces for neural network models
- High-order approximation rates for shallow neural networks with cosine and \(\mathrm{ReLU}^k\) activation functions
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Gradient descent optimizes over-parameterized deep ReLU networks
- Nonlinear approximation via compositions
- Error bounds for approximations with deep ReLU networks
- Interpolation theory for Sobolev functions with partially vanishing trace on irregular open sets
- On some extensions of Bernstein's inequality for self-adjoint operators
- Linear evolution equations of hyperbolic type. II
- Greedy training algorithms for neural networks and applications to PDEs
- Approximation by combinations of ReLU and squared ReLU ridge functions with \(\ell^1\) and \(\ell^0\) controls
- High-Dimensional Probability
- The Gap between Theory and Practice in Function Approximation with Deep Neural Networks
- Deep Neural Network Approximation Theory
- Optimal Convergence Rates for the Orthogonal Greedy Algorithm
- Deep ReLU networks and high-order finite element methods
- Error bounds for approximations with deep ReLU neural networks in \(W^{s,p}\) norms
- Deep Network Approximation for Smooth Functions
- Better Approximations of High Dimensional Smooth Functions by Deep Neural Networks with Rectified Power Units
- Breaking the Curse of Dimensionality with Convex Neural Networks
- An Introduction to Matrix Concentration Inequalities
- The Modern Mathematics of Deep Learning
- Neural network approximation