Regular Policies in Abstract Dynamic Programming
Publication: Q5348471 (MaRDI QID)
DOI: 10.1137/16M1090946
zbMath: 1372.49032
arXiv: 1609.03115
Publication date: 18 August 2017
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1609.03115
Keywords: shortest path; policy iteration; value iteration; discrete-time optimal control; abstract dynamic programming
Mathematics Subject Classification: Dynamic programming in optimal control and differential games (49L20); Dynamic programming (90C39); Markov and semi-Markov decision processes (90C40); Existence theories for optimal control problems involving relations other than differential equations (49J21)
Related Items (2)
- Improved value iteration for neural-network-based stochastic optimal control design
- Stable Optimal Control and Semicontractive Dynamic Programming
Cites Work
- Stochastic optimal control. The discrete time case
- Finite state Markovian decision processes
- Robust shortest path planning and semicontractive dynamic programming
- Stable Optimal Control and Semicontractive Dynamic Programming
- A Mixed Value and Policy Iteration Method for Stochastic Control with Universally Measurable Policies
- An Analysis of Stochastic Shortest Path Problems
- Monotone Mappings with Application in Dynamic Programming
- Stochastic Shortest Path Games
- Risk-Averse Control of Undiscounted Transient Markov Models
- On Convergence of Value Iteration for a Class of Total Cost Markov Decision Processes
- Negative Dynamic Programming
- Contraction Mappings in the Theory Underlying Dynamic Programming
- On terminating Markov decision processes with a risk-averse objective function