Finite Linear Programming Approximations of Constrained Discounted Markov Decision Processes

From MaRDI portal
Publication:2840138

DOI: 10.1137/120867925
zbMath: 1282.90228
OpenAlex: W2092720955
MaRDI QID: Q2840138

Tomás Prieto-Rumeau, François Dufour

Publication date: 17 July 2013

Published in: SIAM Journal on Control and Optimization

Full work available at URL: https://doi.org/10.1137/120867925

Related Items (17)

A convex optimization approach to dynamic programming in continuous state and action spaces
Computable approximations for continuous-time Markov decision processes on Borel spaces based on empirical measures
Nonasymptotic Analysis of Monte Carlo Tree Search
From Infinite to Finite Programs: Explicit Error Bounds with Applications to Approximate Dynamic Programming
Asymptotic optimality and rates of convergence of quantized stationary policies in continuous-time Markov decision processes
Near optimality of quantized policies in stochastic control under weak continuity conditions
Markov decision processes with burstiness constraints
Continuity of cost in Borkar control topology and implications on discrete space and time approximations for controlled diffusions under several criteria
An exact iterative search algorithm for constrained Markov decision processes
Light robustness in the optimization of Markov decision processes with uncertain parameters
Dual-based methods for solving infinite-horizon nonstationary deterministic dynamic programs
Computable approximations for average Markov decision processes in continuous time
Unnamed Item
Stochastic approximations of constrained discounted Markov decision processes
Approximation of average cost Markov decision processes using empirical distributions and concentration inequalities
First passage Markov decision processes with constraints and varying discount factors
A stability result for linear Markovian stochastic optimization problems
