Lyapunov stability analysis of gradient descent-learning algorithm in network training (Q420144)
From MaRDI portal
scientific article; zbMATH DE number 6036990
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Lyapunov stability analysis of gradient descent-learning algorithm in network training | scientific article; zbMATH DE number 6036990 | |
Statements
Lyapunov stability analysis of gradient descent-learning algorithm in network training (English)
21 May 2012
Summary: The Lyapunov stability theorem is applied to guarantee the convergence and stability of the learning algorithm for several networks. The gradient descent learning algorithm and its variants are among the most widely used algorithms for training networks. To guarantee the stability and convergence of the learning process, an upper bound on the learning rates must be established. Here, the Lyapunov stability theorem is developed and applied to several networks in order to guarantee the stability of the learning algorithm.
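A minimal sketch of the idea behind the summary (not the paper's exact analysis): for a quadratic loss, taking the squared distance to the minimizer as a Lyapunov function shows that gradient descent is stable only when the learning rate stays below an upper bound set by the largest curvature, here 2 divided by the largest eigenvalue of the Hessian. The matrix and learning rates below are illustrative choices, not values from the article.

```python
import numpy as np

# Quadratic loss E(w) = 0.5 * w^T A w with A symmetric positive definite.
# Gradient descent: w_{k+1} = w_k - eta * A w_k.
# Lyapunov function V(w) = ||w||^2 decreases each step iff eta < 2 / lambda_max(A).

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])            # Hessian of the quadratic loss (example choice)
lam_max = np.linalg.eigvalsh(A)[-1]   # largest eigenvalue (eigvalsh sorts ascending)
eta_bound = 2.0 / lam_max             # stability upper bound on the learning rate

def final_norm(eta, steps=200):
    """Run gradient descent and return ||w||, i.e. sqrt of the Lyapunov function."""
    w = np.array([1.0, -1.0])
    for _ in range(steps):
        w = w - eta * (A @ w)         # gradient step on E(w)
    return np.linalg.norm(w)

print(final_norm(0.9 * eta_bound))    # below the bound: V shrinks, iterates converge to 0
print(final_norm(1.1 * eta_bound))    # above the bound: V grows, iterates diverge
```

Running with a learning rate just below the bound drives the Lyapunov function toward zero, while a rate just above it makes the same function blow up, which is exactly the dichotomy the learning-rate bound captures.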
Lyapunov stability
network training
learning algorithm