Strong overall error analysis for the training of artificial neural networks via random initializations
DOI: 10.1007/s40304-022-00292-9
zbMATH Open: 1548.68219
MaRDI QID: Q6617376
Authors: Adrian Riekert, Arnulf Jentzen
Publication date: 10 October 2024
Published in: Communications in Mathematics and Statistics
MSC classification:
- Artificial neural networks and deep learning (68T07)
- Numerical optimization and variational techniques (65K10)
Cites Work
- Title not available
- Title not available
- Moment inequalities for sums of dependent random variables under projective conditions
- Multilayer feedforward networks are universal approximators
- A distribution-free theory of nonparametric regression
- Deep neural network approximations for solutions of PDEs based on Monte Carlo algorithms
- Gradient descent optimizes over-parameterized deep ReLU networks
- A comparative analysis of optimization and generalization properties of two-layer neural network and random feature models under gradient descent dynamics
- A priori estimates of the population risk for two-layer neural networks
- Error bounds for approximations with deep ReLU networks
- Universality of deep convolutional neural networks
- Local Rademacher complexities
- On the mathematical foundations of learning
- Universal approximation bounds for superpositions of a sigmoidal function
- Analysis of the Generalization Error: Empirical Risk Minimization over Deep Artificial Neural Networks Overcomes the Curse of Dimensionality in the Numerical Approximation of Black–Scholes Partial Differential Equations
- Full error analysis for the training of deep neural networks
- Approximation by superpositions of a sigmoidal function