Lower bounds for artificial neural network approximations: a proof that shallow neural networks fail to overcome the curse of dimensionality
DOI: 10.1016/j.jco.2023.101746
arXiv: 2103.04488
OpenAlex: W3133651710
MaRDI QID: Q6155895
Philipp Grohs, Shokhrukh Ibragimov, Arnulf Jentzen, Sarah Koppensteiner
Publication date: 7 June 2023
Published in: Journal of Complexity
Full work available at URL: https://arxiv.org/abs/2103.04488
Keywords: lower bounds; artificial neural networks; curse of dimensionality; artificial neural network approximations; overcoming the curse of dimensionality
MSC classification: Artificial intelligence (68Txx); Numerical methods for partial differential equations, initial value and time-dependent initial-boundary value problems (65Mxx); Approximations and expansions (41Axx)
Cites Work
- Tractability of multivariate problems. Volume I: Linear information
- Tractability of multivariate problems. Volume II: Standard information for functionals
- Complexity of Gaussian-radial-basis networks approximating smooth functions
- A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training
- Lower bounds for approximation by MLP neural networks
- Approximation and estimation bounds for artificial neural networks
- Rates of convex approximation in non-Hilbert spaces
- Approximation and learning of convex superpositions
- Multilayer feedforward networks are universal approximators
- Proof that deep artificial neural networks overcome the curse of dimensionality in the numerical approximation of Kolmogorov partial differential equations with constant diffusion and nonlinear drift coefficients
- DNN expression rate analysis of high-dimensional PDEs: application to option pricing
- Deep neural network approximations for solutions of PDEs based on Monte Carlo algorithms
- On the approximation by single hidden layer feedforward neural networks with fixed weights
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- A proof that rectified deep neural networks overcome the curse of dimensionality in the numerical approximation of semilinear heat equations
- Error bounds for approximations with deep ReLU networks
- Space-time error estimates for deep neural network approximations for differential equations
- An overview on deep learning-based approximation methods for partial differential equations
- Deep vs. shallow networks: An approximation theory perspective
- A Remark on Stirling's Formula
- Minimization of Error Functionals over Perceptron Networks
- Geometric Upper Bounds on Rates of Variable-Basis Approximation
- Universal approximation bounds for superpositions of a sigmoidal function
- Comparison of worst case errors in linear and neural network approximation
- Approximation by Combinations of ReLU and Squared ReLU Ridge Functions with $\ell^1$ and $\ell^0$ Controls
- Algorithms for solving high dimensional PDEs: from nonlinear Monte Carlo to machine learning
- Optimal Approximation with Sparsely Connected Deep Neural Networks
- Analysis of the Generalization Error: Empirical Risk Minimization over Deep Artificial Neural Networks Overcomes the Curse of Dimensionality in the Numerical Approximation of Black–Scholes Partial Differential Equations
- Full error analysis for the training of deep neural networks
- Uniform error estimates for artificial neural network approximations for heat equations
- Dependence of Computational Models on Input Dimension: Tractability of Approximation and Optimization Tasks
- Wahrscheinlichkeitstheorie (Probability theory)
- Approximation by superpositions of a sigmoidal function