Neural network approximation: three hidden layers are enough
Publication: 6054944
DOI: 10.1016/j.neunet.2021.04.011
arXiv: 2010.14075
OpenAlex: W3097724114
MaRDI QID: Q6054944
Shijun Zhang, Haizhao Yang, Zuowei Shen
Publication date: 28 September 2023
Published in: Neural Networks
Full work available at URL: https://arxiv.org/abs/2010.14075
Keywords: continuous function, exponential convergence, curse of dimensionality, deep neural network, floor-exponential-step activation function
Related Items (16)
- Deep Adaptive Basis Galerkin Method for High-Dimensional Evolution Equations With Oscillatory Solutions
- Convergence of Physics-Informed Neural Networks Applied to Linear Second-Order Elliptic Interface Problems
- Simultaneous neural network approximation for smooth functions
- Framework for segmented threshold \(\ell_0\) gradient approximation based network for sparse signal recovery
- Deep Neural Networks for Solving Large Linear Systems Arising from High-Dimensional Problems
- Approximation capabilities of measure-preserving neural networks
- Friedrichs Learning: Weak Solutions of Partial Differential Equations via Deep Learning
- Efficient estimation of average derivatives in NPIV models: simulation comparisons of neural network estimators
- Active learning based sampling for high-dimensional nonlinear partial differential equations
- Noncompact uniform universal approximation
- Deep Neural Networks with ReLU-Sine-Exponential Activations Break Curse of Dimensionality in Approximation on Hölder Class
- A Scalable Deep Learning Approach for Solving High-Dimensional Dynamic Optimal Transport
- Designing universal causal deep learning models: The geometric (Hyper)transformer
- On mathematical modeling in image reconstruction and beyond
- A three layer neural network can represent any multivariate function
- Optimal approximation rate of ReLU networks in terms of width and depth
Cites Work
- Optimization by Simulated Annealing
- On a constructive proof of Kolmogorov's superposition theorem
- Lower bounds for approximation by MLP neural networks
- Error bounds for deep ReLU networks using the Kolmogorov-Arnold superposition theorem
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Nonlinear approximation via compositions
- Overcoming the curse of dimensionality in the approximative pricing of financial derivatives with default risks
- Nonparametric regression using deep neural networks with ReLU activation function
- A priori estimates of the population risk for two-layer neural networks
- Error bounds for approximations with deep ReLU networks
- A consensus-based model for global optimization and its mean-field limit
- Universal approximation bounds for superpositions of a sigmoidal function
- A mean field view of the landscape of two-layer neural networks
- Deep Network With Approximation Error Being Reciprocal of Width to Power of Square Root of Depth
- New Error Bounds for Deep ReLU Networks Using Sparse Grids
- Deep Network Approximation Characterized by Number of Neurons
- A note on the expressive power of deep rectified linear unit networks in high‐dimensional spaces
- A Simplex Method for Function Minimization
- The Kolmogorov-Arnold representation theorem revisited