Convergence and error bounds for passive stochastic algorithms using vanishing step size (Q1917662)
scientific article; zbMATH DE number 897966
Statements
Convergence and error bounds for passive stochastic algorithms using vanishing step size (English)
27 October 1996
The main goal of this work is to investigate the asymptotic properties of passive stochastic approximation algorithms with decreasing step size and window width. We obtain sufficient conditions for convergence with probability one (w.p.1) and derive upper bounds on the estimation errors under rather general conditions.

Various applications in control theory and optimization require finding a solution of \(\overline f(x) = 0\) for some nonlinear function \(\overline f(\cdot)\). Very often the form of the function is either quite complex or not available at all; only noisy measurements are available. Thus one is forced to use methods such as stochastic approximation to resolve the problem. In many situations, however, both the input and the output are subject to errors, and the standard or traditional stochastic approximation methods are not applicable. Nevertheless, these problems fit into the framework of passive stochastic approximation, where, unlike the traditional approach, passive rather than active strategies are employed.

The remainder of the paper is arranged as follows. The main assumptions and conditions, as well as statements of the results, are gathered next. Section 3 presents the proof of w.p.1 convergence, which is divided into two steps: in the first step we obtain the w.p.1 boundedness of the iterates, and in the second step we establish the desired convergence property. Section 4 derives the upper bounds on the estimation errors; the main analytical techniques employed are perturbed Lyapunov function methods. Finally, some further remarks are made in Section 5.
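For orientation, a minimal numerical sketch of a passive stochastic approximation iterate with vanishing step size \(a_n\) and window width \(h_n\) might look as follows. The Gaussian kernel, the toy regression function, the noise model, and the tuning constants are illustrative assumptions, not the setting analyzed in the paper.

```python
# Minimal sketch of passive stochastic approximation with vanishing step size
# and window width. All concrete choices below (kernel, f_bar, noise, constants)
# are illustrative assumptions, not the ones studied in the paper.
import numpy as np

rng = np.random.default_rng(0)

def f_bar(x):
    # Unknown regression function whose root we seek; assumed toy example with root 1.5.
    return 2.0 * (1.5 - x)

def kernel(u):
    # Gaussian kernel weighting observations whose inputs fall near the current estimate.
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

x = 0.0                           # current estimate of the root of f_bar
a0, h0, gamma = 1.0, 1.0, 0.25
for n in range(1, 20001):
    a_n = a0 / n                  # vanishing step size
    h_n = h0 / n**gamma           # vanishing window width
    u_n = rng.normal(1.5, 1.0)                 # passively observed input (not chosen by the algorithm)
    y_n = f_bar(u_n) + rng.normal(0.0, 0.5)    # noisy output measurement
    # Kernel-weighted correction: only observations with u_n close to x contribute appreciably.
    x = x + (a_n / h_n) * kernel((u_n - x) / h_n) * y_n

print(f"estimate of the root: {x:.3f}")        # should drift toward 1.5 under these assumptions
```

The step sizes and window widths are chosen so that, in this one-dimensional sketch, \(\sum_n a_n = \infty\) while \(\sum_n a_n^2 / h_n < \infty\), the kind of trade-off between sufficient excitation and vanishing noise variance that conditions of this type are meant to balance.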
Keywords: decreasing window width; upper bounds of estimation errors; asymptotic properties; passive stochastic approximation algorithms; decreasing step size; perturbed Lyapunov function methods