The convergence of stochastic gradient algorithms applied to learning in neural networks
From MaRDI portal
Publication:1883924
zbMATH Open: 1060.60501 · MaRDI QID: Q1883924
Publication date: 19 October 2004
Published in: Automation and Remote Control
Mathematics Subject Classification:
Learning and adaptive systems in artificial intelligence (68T05)
Strong limit theorems (60F15)
\(L^p\)-limit theorems (60F25)
Related Items (11)
Unnamed Item
Comparison of four gradient-learning algorithms for neural network Wiener models
Unnamed Item
Local Convergence of Recursive Learning to Steady States and Cycles in Stochastic Nonlinear Models
Unnamed Item
Taming Neural Networks with TUSLA: Nonconvex Learning via Adaptive Stochastic Gradient Langevin Algorithms
Convergence Proof for Central Force Optimization Algorithm and Application in Neural Networks
Convergence analysis of batch gradient algorithm for three classes of sigma-pi neural networks
Convergence results on stochastic adaptive learning
Stochastic generalized gradient methods for training nonconvex nonsmooth neural networks
Convergence analysis for gradient flows in the training of artificial neural networks with ReLU activation
This page was built for publication: The convergence of stochastic gradient algorithms applied to learning in neural networks