Concepts and methods for discrete and continuous time control under uncertainty
Publication:1265914
DOI: 10.1016/S0167-6687(98)00006-7 · zbMath: 0916.93085 · OpenAlex: W2069374190 · MaRDI QID: Q1265914
Publication date: 19 July 1999
Published in: Insurance: Mathematics and Economics
Full work available at URL: https://doi.org/10.1016/s0167-6687(98)00006-7
Keywords: dynamic programming; stochastic optimal control; finite horizon; controlled Markov chains; transforms of the payoff function
MSC: Dynamic programming in optimal control and differential games (49L20); Optimal stochastic control (93E20)
Related Items (8)
- Risk measurement and risk-averse control of partially observable discrete-time Markov systems
- Optimal portfolio construction under partial information for a balanced fund
- An investigation of the theory of bank portfolio allocation within a discrete stochastic framework using optimal control techniques
- Bayesian optimal control for a non-autonomous stochastic discrete time system
- The linear-quadratic stochastic optimal control problem with random horizon at the finite number of infinitesimal events
- Robust analysis for premium-reserve models in a stochastic nonlinear discrete-time varying framework
- A multilevel approach for stochastic nonlinear optimal control
- Process-based risk measures and risk-averse control of discrete-time systems
Cites Work
- Probability methods for approximations in stochastic control and for elliptic equations
- Dynamic programming and stochastic control
- Logarithmic transformations for discrete-time, finite-horizon stochastic control problems
- An approximation method for stochastic control problems with partial observation of the state - a method for constructing \(\varepsilon\)-optimal controls
- Explicit solutions for multivariate, discrete-time control problems under uncertainty
- Connections between stochastic control and dynamic games
- On dynamic programming for sequential decision problems under a general form of uncertainty
- A mathematical theory of hints. An approach to the Dempster-Shafer theory of evidence
- Numerical aspects of monotone approximations in convex stochastic control problems
- Successive approximation methods for the solution of optimal control problems
- Designing approximation schemes for stochastic optimization problems, in particular for stochastic programs with recourse
- On the construction of nearly optimal strategies for a general problem of control of partially observed diffusions
- Convergence of discretization procedures in dynamic programming
- Approximations of Dynamic Programs, I
- Approximations of Dynamic Programs, II
- An Approach to Discrete-Time Stochastic Control Problems under Partial Observation
- Optimal Continuous-Parameter Stochastic Control