Invariant problems in dynamic programming - average reward criterion
From MaRDI portal
Publication: 1145070
DOI: 10.1016/0304-4149(80)90014-9
zbMath: 0444.90102
OpenAlex: W2003480407
MaRDI QID: Q1145070
Publication date: 1980
Published in: Stochastic Processes and their Applications
Full work available at URL: https://doi.org/10.1016/0304-4149(80)90014-9
Keywords: average reward criterion; finite action space; arbitrary state space; beta-optimal policy; invariant problem
Related Items (5)
- A note on risk-sensitive control of invariant models
- Recurrence conditions for Markov decision processes with Borel state space: A survey
- Robust Markov control processes
- Average Cost Optimality Inequality for Markov Decision Processes with Borel Spaces and Universally Measurable Policies
- Controlled semi-Markov models under long-run average rewards
Cites Work
- Averaging vs. discounting in dynamic programming: a counterexample
- Linear Programming and Sequential Decisions
- Invariant problems in discounted dynamic programming
- Discrete Dynamic Programming
- Discounted Dynamic Programming
- Markovian Sequential Replacement Processes
- Denumerable State Markovian Decision Processes-Average Cost Criterion
- Negative Dynamic Programming
- Non-Discounted Denumerable Markovian Decision Models
- A Solution to a Countable System of Equations Arising in Markovian Decision Processes
- Arbitrary State Markovian Decision Processes