An axiomatic approach to Markov decision processes
From MaRDI portal
Publication: 2699030
DOI: 10.1007/s00186-022-00806-9
OpenAlex: W4311111266
MaRDI QID: Q2699030
Publication date: 26 April 2023
Published in: Mathematical Methods of Operations Research
Full work available at URL: https://arxiv.org/abs/1701.02879
MSC classifications:
- Dynamic programming (90C39)
- Applications of Markov chains and discrete-time Markov processes on general state spaces (social mobility, learning theory, industrial processes, etc.) (60J20)
- Statistical decision theory (62C99)
Cites Work
- Examples concerning Abel and Cesàro limits
- Risk-averse dynamic programming for Markov decision processes
- Criteria of optimality in the infinite-time optimal control problem
- Ordering infinite utility streams comes at the cost of a non-Ramsey set
- Generalized time-invariant overtaking
- An axiomatic approach to complete patience and time invariance
- A counterexample on overtaking optimality
- Intertemporal equity and the extension of the Ramsey criterion
- Planning for the long run: programming with patient, Pareto responsive preferences
- Resolving distributional conflicts between generations
- Blackwell optimality in Markov decision processes with partial observation
- A characterization of Cesàro average utility
- Utilitarianism for infinite utility streams: a new welfare criterion and its axiomatic characterization
- Sporadic overtaking optimality in Markov decision problems
- On the continuity of ethical social welfare orders on infinite utility streams
- Koopmans' constant discounting for intertemporal choice: A simplification and a generalization
- Examples in Markov Decision Processes
- Equity and the Informational Basis of Collective Choice
- Infinite-horizon Markov control processes with undiscounted cost criteria: from average to overtaking optimality
- Optimal Economic Growth and Uncertainty: The No Discounting Case
- The limit of discounted utilitarianism
- Discrete-Time Controlled Markov Processes with Average Cost Criterion: A Survey
- Average Utility Maximization: A Preference Foundation
- More Risk-Sensitive Markov Decision Processes
- Discrete Dynamic Programming
- Aggregating Infinite Utility Streams with Intergenerational Equity: The Impossibility of Being Paretian
- On the Iterative Method of Dynamic Programming on a Finite Space Discrete Time Markov Process
- On Finding Optimal Policies in Discrete Dynamic Programming with No Discounting
- Stationary Ordinal Utility and Impatience
- An Optimality Condition for Discrete Dynamic Programming with no Discounting
- Letter to the Editor—Criterion Equivalence in Discrete Dynamic Programming
- On Existence of Weakly Maximal Programmes in a Multi-Sector Economy
- An Axiomatic Basis for the Ramsey-Weizsäcker Overtaking Criterion
- Handbook of Markov decision processes. Methods and applications