On maximizing the average time at a goal
Publication: 792886
DOI: 10.1016/0304-4149(84)90010-3
zbMath: 0537.90095
OpenAlex: W2030488046
MaRDI QID: Q792886
Theodore P. Hill, Stephen Demko
Publication date: 1984
Published in: Stochastic Processes and their Applications
Full work available at URL: https://digitalcommons.calpoly.edu/rgp_rsr/55
Keywords: infinite horizon; gambling; stationary strategy; arbitrary decision sets; average time spent at a goal; discrete time decision processes; finite state space; general average reward
Mathematics Subject Classification: Dynamic programming (90C39); Markov renewal processes, semi-Markov processes (60K15); Markov and semi-Markov decision processes (90C40)
Related Items (4)
- The existence of good Markov strategies for decision processes with general payoffs
- An expected average reward criterion
- On the chance to visit a goal set infinitely often
- Finite state Markov decision models with average reward criteria
Cites Work
- Decision processes with total-cost criteria
- On dynamic programming: Compactness of the space of policies
- Finite state Markovian decision processes
- On Sequential Decisions and Markov Chains
- A Controlled Finite Markov Chain with an Arbitrary Set of Decisions
- On the Existence of Good Markov Strategies
- Persistently ϵ-Optimal Strategies
- Discounted Dynamic Programming
- On the Existence of Good Stationary Strategies
- On the Existence of Stationary Optimal Strategies
- On Measurable Gambling Problems
- Optimal decision procedures for finite markov chains. Part I: Examples