A numerical analysis of stochastic neural networks (Q2746464)
From MaRDI portal
scientific article; zbMATH DE number 1656108
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | A numerical analysis of stochastic neural networks | scientific article; zbMATH DE number 1656108 | |
Statements
23 June 2002
stochastic neural networks
stability
error bound
theta method
stochastic differential equation
Euler method
Milstein method
dissipative
A numerical analysis of stochastic neural networks (English)
A differential equation modeling neural networks is generalized, by incorporating randomness in its inputs, to the stochastic differential equation
\[
dX(t)= [C- BX(t)+ Ag(X(t))]\,dt+ \sum^n_{m=1} \sigma_m(X(t))\,d\beta_m(t),\tag{1}
\]
where \(\sigma\) is an \(n\times n\) matrix and \(\beta\) is an \(n\)-dimensional standard Brownian motion. The orders of the mean-square error of the numerical solutions of equation (1) obtained with the Euler method, the Milstein method, and the Euler \(\theta\)-scheme are derived. Mean-square boundedness of the Milstein and the Euler \(\theta\)-scheme solutions is proved. After changing \(\sigma\) and \(g\) in equation (1) so that each is a function of \((t,X(t))\), theorems establishing conditions under which the solution \(X(t)\) is mean-square dissipative and mean-square stable are proved. Finally, approximations of additive functionals of the solution \(X(t)\) of equation (1) for \(n= 1\) are derived, and their mean-square error order and mean-square boundedness are established.
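As an illustration of the explicit schemes discussed above, the following is a minimal NumPy sketch of the Euler (Euler-Maruyama) discretization of equation (1). The activation \(g\), the diffusion \(\sigma\), and all parameter values in the usage example are illustrative placeholders, not the choices made in the paper.

```python
import numpy as np

def euler_maruyama(C, B, A, g, sigma, x0, T, N, rng=None):
    """Euler-Maruyama approximation of
        dX(t) = [C - B X(t) + A g(X(t))] dt + sigma(X(t)) dbeta(t)
    on [0, T] with N equal steps.  Assumed (illustrative) signatures:
    g maps an n-vector to an n-vector, sigma maps an n-vector to an
    n-by-n matrix whose m-th column plays the role of sigma_m."""
    rng = np.random.default_rng() if rng is None else rng
    n = x0.shape[0]
    h = T / N
    X = np.empty((N + 1, n))
    X[0] = x0
    for k in range(N):
        drift = C - B @ X[k] + A @ g(X[k])
        dW = rng.normal(0.0, np.sqrt(h), size=n)       # Brownian increments over one step
        X[k + 1] = X[k] + drift * h + sigma(X[k]) @ dW
    return X

# Hypothetical example: tanh activation, constant diagonal diffusion.
n = 3
C, B, A = np.zeros(n), np.eye(n), 0.5 * np.ones((n, n))
path = euler_maruyama(C, B, A, np.tanh,
                      lambda x: 0.1 * np.eye(n),
                      x0=np.ones(n), T=10.0, N=10_000)
```

Under standard Lipschitz and linear-growth assumptions, the mean-square error of this scheme is of order \(h^{1/2}\); the Milstein correction term raises the order to \(h\), which is the kind of comparison the paper quantifies for equation (1).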