Optimal control of Markovian jump processes with partial information and applications to a parallel queueing model
DOI: 10.1007/s00186-009-0284-7 · zbMath: 1177.93102 · OpenAlex: W2044274945 · MaRDI QID: Q1044219
Publication date: 11 December 2009
Published in: Mathematical Methods of Operations Research
Full work available at URL: https://doi.org/10.1007/s00186-009-0284-7
Keywords: MDP; filter process; generalized Hamilton-Jacobi-Bellman equation; Markovian jump process; parallel queueing; stochastic control problem with partial information
MSC: Queues and service in operations research (90B22); Control/observation systems with incomplete information (93C41); Optimal stochastic control (93E20)
Related Items (10)
Cites Work
- Stochastic optimal control. The discrete time case
- On the two-armed bandit problem with non-observed Poissonian switching of arms.
- Exact solution of the Bellman equation for a \(\beta\)-discounted reward in a two-armed bandit with switching arms
- Admission control with incomplete information to a finite buffer queue
- Admission Control with Incomplete Information of a Queueing System
- Optimization and nonsmooth analysis
- Applied Probability and Queues
- Portfolio optimization with jumps and unobservable intensity process