Markov decision processes with a new optimality criterion: Discrete time

From MaRDI portal
Publication: Q2560097

DOI: 10.1214/aos/1176342415
zbMath: 0259.90054
OpenAlex: W2033436114
MaRDI QID: Q2560097

Stratton C. Jaquette

Publication date: 1973

Published in: The Annals of Statistics

Full work available at URL: https://doi.org/10.1214/aos/1176342415




Related Items (23)

Risk measurement and risk-averse control of partially observable discrete-time Markov systems
Unnamed Item
Markov decision processes with a minimum-variance criterion
A variance minimization problem for a Markov decision process
Optimality equations and inequalities in a class of risk-sensitive average cost Markov decision chains
Markov Decision Problems Where Means Bound Variances
Risk-averse dynamic programming for Markov decision processes
Markov decision processes with average-value-at-risk criteria
Distorted probability operator for dynamic portfolio optimization in times of socio-economic crisis
Approximate solutions to constrained risk-sensitive Markov decision processes
Unnamed Item
A note on negative dynamic programming for risk-sensitive control
Discounted Markov decision processes with utility constraints
On the total reward variance for continuous-time Markov reward chains
Dynamic programming of expectation and variance
Risk-sensitive dividend problems
Unnamed Item
Solutions of the average cost optimality equation for finite Markov decision chains: Risk-sensitive and risk-neutral criteria
Semi-Markov decision processes with variance minimization criterion
On mean reward variance in semi-Markov processes
Computational Methods for Risk-Averse Undiscounted Transient Markov Models
Process-based risk measures and risk-averse control of discrete-time systems
Notes on average Markov decision processes with a minimum-variance criterion




This page was built for publication: Markov decision processes with a new optimality criterion: Discrete time