Open Problem—Convergence and Asymptotic Optimality of the Relative Value Iteration in Ergodic Control
Publication: 5113902
DOI: 10.1287/stsy.2019.0040
zbMath: 1447.93370
OpenAlex: W2973929671
MaRDI QID: Q5113902
Publication date: 18 June 2020
Published in: Stochastic Systems
Full work available at URL: https://doi.org/10.1287/stsy.2019.0040
Mathematics Subject Classification:
- Optimal stochastic control (93E20)
- Markov and semi-Markov decision processes (90C40)
- Networked control (93B70)
Related Items (2)
- Introduction to the Applied Probability Society’s “Open Problems in Applied Probability” Session at the INFORMS Annual Meeting, Phoenix, Arizona, November 4–7, 2018
- On the relative value iteration with a risk-sensitive criterion
Cites Work
- Large time asymptotic problems for optimal stochastic control with superlinear cost
- On solutions of mean field games with ergodic cost
- Subgeometric rates of convergence of \(f\)-ergodic strong Markov processes
- A note on the convergence rate of the value iteration scheme in controlled Markov chains
- Illustrated review of convergence conditions of the value iteration algorithm and the rolling horizon procedure for average-cost MDPs
- Value iteration and optimization of multiclass queueing networks
- Value iteration in average cost Markov control processes on Borel spaces
- Dynamic programming, Markov chains, and the method of successive approximations
- Relative Value Iteration for Stochastic Differential Games
- Value Iteration in a Class of Communicating Markov Decision Chains with the Average Cost Criterion
- Error bounds for rolling horizon policies in discrete-time Markov control processes
- Convergence of the Relative Value Iteration for the Ergodic Control Problem of Nondegenerate Diffusions under Near-Monotone Costs
- On Convergence of Value Iteration for a Class of Total Cost Markov Decision Processes