The value iteration algorithm is not strongly polynomial for discounted dynamic programming
Publication: 1667204
DOI: 10.1016/j.orl.2013.12.011
zbMath: 1408.90307
arXiv: 1312.6832
OpenAlex: W2040228279
MaRDI QID: Q1667204
Jefferson Huang, Eugene A. Feinberg
Publication date: 27 August 2018
Published in: Operations Research Letters
Full work available at URL: https://arxiv.org/abs/1312.6832
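For context, the value iteration algorithm referred to in the title is the classical fixed-point iteration for discounted Markov decision processes, V_{k+1}(s) = max_a [ r(s,a) + γ Σ_{s'} P(s'|s,a) V_k(s') ]. A minimal sketch of that update follows; the tiny two-state, two-action MDP used as input is hypothetical illustrative data, not taken from the paper:

```python
# Sketch of value iteration for a discounted MDP (assumed standard form,
# not the specific instances constructed in the paper):
#   V_{k+1}(s) = max_a [ r(s,a) + gamma * sum_{s'} P(s'|s,a) * V_k(s') ]

def value_iteration(P, r, gamma, tol=1e-8, max_iter=100_000):
    """P[s][a][s2] = transition probability, r[s][a] = one-step reward.
    Iterates the Bellman optimality operator until successive value
    vectors differ by less than tol in the sup norm."""
    n = len(r)
    V = [0.0] * n
    for _ in range(max_iter):
        V_new = [
            max(
                r[s][a] + gamma * sum(P[s][a][s2] * V[s2] for s2 in range(n))
                for a in range(len(r[s]))
            )
            for s in range(n)
        ]
        if max(abs(V_new[s] - V[s]) for s in range(n)) < tol:
            return V_new
        V = V_new
    return V

# Hypothetical example: 2 states, 2 actions each.
P = [
    [[0.9, 0.1], [0.2, 0.8]],  # transitions from state 0 under actions 0, 1
    [[0.5, 0.5], [0.0, 1.0]],  # transitions from state 1 under actions 0, 1
]
r = [[1.0, 0.0], [0.0, 2.0]]   # r[s][a]
V = value_iteration(P, r, gamma=0.9)
```

The iteration converges geometrically at rate γ, but the paper's point is that the number of iterations needed to identify an optimal policy exactly cannot be bounded by a polynomial in the number of states and actions alone, so value iteration is not strongly polynomial.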
Related Items (10):
- Complexity bounds for approximately solving discounted MDPs by value iterations
- The stochastic shortest path problem: a polyhedral combinatorics perspective
- Formalization of methods for the development of autonomous artificial intelligence systems
- Uniform Turnpike Theorems for Finite Markov Decision Processes
- Randomized Linear Programming Solves the Markov Decision Problem in Nearly Linear (Sometimes Sublinear) Time
- Value set iteration for two-person zero-sum Markov games
- Modified policy iteration algorithms are not strongly polynomial for discounted dynamic programming
- Improved bound on the worst case complexity of policy iteration
- Variance Reduced Value Iteration and Faster Algorithms for Solving Markov Decision Processes
- On the reduction of total-cost and average-cost MDPs to discounted MDPs