Hamilton-Jacobi-Bellman inequality for the average control of piecewise deterministic Markov processes
DOI: 10.1080/17442508.2018.1546305
zbMath: 1502.90187
OpenAlex: W2900672501
MaRDI QID: Q5087027
Oswaldo L. V. Costa, François Dufour
Publication date: 8 July 2022
Published in: Stochastics
Full work available at URL: https://doi.org/10.1080/17442508.2018.1546305
Keywords: continuous control; continuous-time Markov decision process; piecewise deterministic Markov process; Hamilton-Jacobi-Bellman inequality; average control
MSC: Continuous-time Markov processes on general state spaces (60J25); Dynamic programming in optimal control and differential games (49L20); Optimal stochastic control (93E20); Markov and semi-Markov decision processes (90C40)
Related Items (1)
Cites Work
- Calcul stochastique et problèmes de martingales [Stochastic calculus and martingale problems]
- The discounted method and equivalence of average criteria for risk-sensitive Markov decision processes on Borel spaces
- A survey of recent results on continuous-time Markov decision processes (with comments and rejoinder)
- Continuous-time Markov decision processes. Theory and applications
- On piecewise deterministic Markov control processes: Control of jumps and of risk processes in insurance
- Piecewise deterministic Markov control processes with feedback controls and unbounded costs
- Generalized Bellman-Hamilton-Jacobi optimality conditions for a control problem with a boundary condition
- The transformation method for continuous-time Markov decision processes
- Average optimality for continuous-time Markov decision processes with a policy iteration approach
- A Dynamic Programming Algorithm for the Optimal Control of Piecewise Deterministic Markov Processes
- Constrained and Unconstrained Optimal Discounted Control of Piecewise Deterministic Markov Processes
- New discount and average optimality conditions for continuous-time Markov decision processes
- Average Continuous Control of Piecewise Deterministic Markov Processes
- Discounted Continuous-Time Markov Decision Processes with Constraints: Unbounded Transition and Loss Rates
- The Vanishing Discount Approach for the Average Continuous Control of Piecewise Deterministic Markov Processes
- Zero-Sum Ergodic Stochastic Games with Feller Transition Probabilities
- Bellman inequalities in Markov decision deterministic drift processes
- A Target Recognition Problem: Sequential Analysis and Optimal Control
- On Reducing a Jump Controllable Markov Model to a Model with Discrete Time
- Necessary and sufficient optimality conditions for control of piecewise deterministic Markov processes
- Multivariate point processes: predictable projection, Radon-Nikodym derivatives, representation of martingales
- Conditions for optimality in dynamic programming and for the limit of n-stage optimal policies to be optimal
- Average Optimality in Dynamic Programming with General State Space
- Average optimality for continuous-time Markov decision processes under weak continuity conditions
- Error bounds for rolling horizon policies in discrete-time Markov control processes