Pages that link to "Item:Q5083408"
From MaRDI portal
The following pages link to Full error analysis for the training of deep neural networks (Q5083408):
Displaying 12 items.
- A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions (Q2145074)
- A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions (Q2167333)
- An analysis of training and generalization errors in shallow and deep networks (Q2185668)
- Error Analysis and Improving the Accuracy of Winograd Convolution for Deep Neural Networks (Q5066584)
- Overall error analysis for the training of deep neural networks via stochastic gradient descent with random initialisation (Q6107984)
- Lower bounds for artificial neural network approximations: a proof that shallow neural networks fail to overcome the curse of dimensionality (Q6155895)
- Learning the random variables in Monte Carlo simulations with stochastic gradient descent: Machine learning for parametric PDEs and financial derivative pricing (Q6178392)
- Deep learning based on randomized quasi-Monte Carlo method for solving linear Kolmogorov partial differential equation (Q6582041)
- Error analysis for deep neural network approximations of parametric hyperbolic conservation laws (Q6590625)
- Numerical analysis of physics-informed neural networks and related models in physics-informed machine learning (Q6598418)
- Strong overall error analysis for the training of artificial neural networks via random initializations (Q6617376)
- Error analysis for empirical risk minimization over clipped ReLU networks in solving linear Kolmogorov partial differential equations (Q6662424)