Average Optimality in Markov Control Processes via Discounted-Cost Problems and Linear Programming
From MaRDI portal
DOI: 10.1137/S0363012993245306
zbMath: 0853.93106
OpenAlex: W2012263782
MaRDI QID: Q4874952
Onésimo Hernández-Lerma, Jean-Bernard Lasserre
Publication date: 11 June 1996
Published in: SIAM Journal on Control and Optimization
Full work available at URL: https://doi.org/10.1137/s0363012993245306
Keywords: gap; duality; linear program; average-cost Markov control process; non-compact constraint set; stochastic kernel on Borel state space; vanishing discounted approach
Related Items (40)
Partial hedging of American options in discrete time and complete markets: convex duality and optimal Markov policies
Long run risk sensitive portfolio with general factors
The Expected Total Cost Criterion for Markov Decision Processes under Constraints: A Convex Analytic Approach
A new learning algorithm for optimal stopping
Nonuniqueness versus uniqueness of optimal policies in convex discounted Markov decision processes
Impulsive control for continuous-time Markov decision processes: a linear programming approach
Randomized and Relaxed Strategies in Continuous-Time Markov Decision Processes
Minimum Average Value-at-Risk for Finite Horizon Semi-Markov Decision Processes in Continuous Time
Convex analytic approach to constrained discounted Markov decision processes with non-constant discount factors
A semimartingale characterization of average optimal stationary policies for Markov decision processes
Verification of General Markov Decision Processes by Approximate Similarity Relations and Policy Refinement
Distorted probability operator for dynamic portfolio optimization in times of socio-economic crisis
The transformation method for continuous-time Markov decision processes
Continuous-Time Markov Decision Processes with Exponential Utility
Robustness to Incorrect System Models in Stochastic Control
Exponential Convergence and Stability of Howard's Policy Improvement Algorithm for Controlled Diffusions
Extreme Occupation Measures in Markov Decision Processes with an Absorbing State
Fatou's Lemma in Its Classical Form and Lebesgue's Convergence Theorems for Varying Measures with Applications to Markov Decision Processes
Performance analysis for controlled semi-Markov systems with application to maintenance
On Some Impulse Control Problems with Constraint
Mean field Markov decision processes
Robustness to Approximations and Model Learning in MDPs and POMDPs
On Finite Approximations to Markov Decision Processes with Recursive and Nonlinear Discounting
Markov Processes with Restart
Finite approximation of the first passage models for discrete-time Markov decision processes with varying discount factors
Continuous-time Markov decision processes with state-dependent discount factors
Asymptotic Normality of Discrete-Time Markov Control Processes
Average optimality for continuous-time Markov decision processes in Polish spaces
Multiobjective Stopping Problem for Discrete-Time Markov Processes: Convex Analytic Approach
New discount and average optimality conditions for continuous-time Markov decision processes
Bounds for the Ruin Probability of a Discrete-Time Risk Process
Characterizations of overtaking optimality for controlled diffusion processes
The Vanishing Discount Approach for the Average Continuous Control of Piecewise Deterministic Markov Processes
Optimality of Mixed Policies for Average Continuous-Time Markov Decision Processes with Constraints
Absorbing Continuous-Time Markov Decision Processes with Total Cost Criteria
The Expected Total Cost Criterion for Markov Decision Processes under Constraints
Invariant measures for multidimensional fractional stochastic volatility models
Robust utility maximization of terminal wealth with drift and volatility uncertainty
On gradual-impulse control of continuous-time Markov decision processes with exponential utility