An approximation method for stochastic control problems with partial observation of the state - a method for constructing \(\epsilon\)-optimal controls
From MaRDI portal
Publication:1108258
zbMath 0653.93069 · MaRDI QID Q1108258
Wolfgang J. Runggaldier, Alain Bensoussan
Publication date: 1987
Published in: Acta Applicandae Mathematicae
stochastic approximation; dynamic programming techniques; continuous-time stochastic control problem with partial observations; random measure transformation
Stochastic ordinary differential equations (aspects of stochastic analysis) (60H10); Dynamic programming in optimal control and differential games (49L20); Signal detection and filtering (aspects of stochastic processes) (60G35); Optimal stochastic control (93E20); Random measures (60G57); Existence of optimal solutions to problems involving randomness (49J55)
Related Items
On the construction of \(\epsilon\)-optimal strategies in partially observed MDPs, Discrete dynamic programming and viscosity solutions of the Bellman equation, Discrete-Time Approximation of Stochastic Optimal Control with Partial Observation, Approximations for discrete-time adaptive control: Construction of \(\varepsilon\)-optimal controls, Concepts and methods for discrete and continuous time control under uncertainty, A remark on control of partially observed Markov chains