The Gap between Theory and Practice in Function Approximation with Deep Neural Networks
DOI: 10.1137/20M131309X
zbMath: 1483.65028
arXiv: 2001.07523
OpenAlex: W3158148831
MaRDI QID: Q4999396
Publication date: 6 July 2021
Published in: SIAM Journal on Mathematics of Data Science
Full work available at URL: https://arxiv.org/abs/2001.07523
MSC classification:
- Artificial neural networks and deep learning (68T07)
- Orthogonal functions and polynomials, general theory of nontrigonometric harmonic analysis (42C05)
- Numerical interpolation (65D05)
- Rate of convergence, degree of approximation (41A25)
- Algorithms for approximation of functions (65D15)
- Complexity and performance of numerical algorithms (65Y20)
- Approximation by arbitrary nonlinear expressions; widths and entropy (41A46)
- Sampling theory in information and communication theory (94A20)
Cites Work
- Convergence of quasi-optimal stochastic Galerkin methods for a class of PDEs with random coefficients
- Convergence rates of best \(N\)-term Galerkin approximations for a class of elliptic SPDEs
- Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations
- Interpolation via weighted \(\ell_{1}\) minimization
- Numerical integration using sparse grids
- Infinite-dimensional compressed sensing and function interpolation
- Multilayer feedforward networks are universal approximators
- Adaptive sparse grid construction in a context of local anisotropy and multiple hierarchical parents
- The Deep Ritz Method: a deep learning-based numerical algorithm for solving variational problems
- Exponential convergence of the deep neural network approximation for analytic functions
- A dynamically adaptive sparse grids method for quasi-optimal interpolation of multidimensional functions
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Error bounds for approximations with deep ReLU networks
- Analysis of quasi-optimal polynomial approximations for parameterized PDEs with deterministic and stochastic coefficients
- Correcting for unknown errors in sparse high-dimensional function approximation
- Analytic regularity and polynomial approximation of parametric and stochastic elliptic PDEs
- On the optimal polynomial approximation of stochastic PDEs by Galerkin and collocation methods
- Probing the Pareto Frontier for Basis Pursuit Solutions
- A Sparse Grid Stochastic Collocation Method for Partial Differential Equations with Random Input Data
- Polynomial approximation via compressed sensing of high-dimensional functions on lower sets
- Deep learning in high dimension: Neural network expression rates for generalized polynomial chaos expansions in UQ
- Stochastic finite element methods for partial differential equations with random input data
- Deep ReLU networks and high-order finite element methods
- Error bounds for approximations with deep ReLU neural networks in \(W^{s,p}\) norms
- Deep Network Approximation Characterized by Number of Neurons
- A mixed \(\ell_{1}\) regularization approach for sparse simultaneous approximation of parameterized PDEs
- Solving inverse problems using data-driven models
- Approximation of high-dimensional parametric PDEs
- Breaking the Curse of Dimensionality with Convex Neural Networks
- Approximation by superpositions of a sigmoidal function