Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness
Publication: 2057701
DOI: 10.1016/j.neunet.2020.06.024
zbMATH: 1475.68315
arXiv: 1905.11427
OpenAlex: W3039204554
Wikidata: Q97517817 (Scholia: Q97517817)
MaRDI QID: Q2057701
Pengzhan Jin, Lu Lu, George Em Karniadakis, Yi-Fa Tang
Publication date: 7 December 2021
Published in: Neural Networks
Full work available at URL: https://arxiv.org/abs/1905.11427
Keywords: neural networks; data distribution; generalization error; learnability; cover complexity; neural network smoothness
Related Items (6)
- Approximation capabilities of measure-preserving neural networks
- Reliable extrapolation of deep neural operators informed by physics or sparse observations
- Quantification on the generalization performance of deep neural network with Tychonoff separation axioms
- Physics-Informed Neural Networks with Hard Constraints for Inverse Design
- Mosaic flows: a transferable deep learning framework for solving PDEs on unseen domains
- Deep learning architectures for nonlinear operator functions and nonlinear inverse problems
Uses Software
Cites Work
- Balls in \(\mathbb{R}^k\) do not cut all subsets of \(k+2\) points
- Multilayer feedforward networks are universal approximators
- Large-Scale Machine Learning with Stochastic Gradient Descent
- Robust Large Margin Deep Neural Networks
- Rademacher and Gaussian complexities: risk bounds and structural results (DOI: 10.1162/153244303321897690)
- On the information bottleneck theory of deep learning
- Approximation by superpositions of a sigmoidal function
- The elements of statistical learning. Data mining, inference, and prediction