Pages that link to "Item:Q2034567"
From MaRDI portal
The following pages link to Non-convergence of stochastic gradient descent in the training of deep neural networks (Q2034567):
Displaying 11 items.
- Solving high-dimensional Hamilton-Jacobi-Bellman PDEs using neural networks: perspectives from the theory of controlled diffusions and measures on path space (Q825596)
- Convergence of stochastic gradient descent in deep neural network (Q2025203)
- Stochastic generalized gradient methods for training nonconvex nonsmooth neural networks (Q2058689)
- Constructive deep ReLU neural network approximation (Q2067309)
- A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions (Q2145074)
- A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions (Q2167333)
- Gradient descent optimizes over-parameterized deep ReLU networks (Q2183586)
- On the S-instability and degeneracy of discrete deep learning models (Q5006533)
- Stationary Density Estimation of Itô Diffusions Using Deep Learning (Q5886225)
- Deep multimodal autoencoder for crack criticality assessment (Q6089256)
- Fredholm integral equations for function approximation and the training of neural networks (Q6655076)