On dynamic programming: Compactness of the space of policies
Publication: 1221981
DOI: 10.1016/0304-4149(75)90031-9
zbMath: 0317.60025
OpenAlex: W2070935253
Wikidata: Q126298569
Scholia: Q126298569
MaRDI QID: Q1221981
Publication date: 1975
Published in: Stochastic Processes and their Applications
Full work available at URL: https://doi.org/10.1016/0304-4149(75)90031-9
Mathematics Subject Classification: Markov and semi-Markov decision processes (90C40) ⋮ Stochastic processes (60G99) ⋮ Convergence of probability measures (60B10)
Related Items
Semicontinuous nonstationary stochastic games ⋮ Continuity Properties of Value Functions in Information Structures for Zero-Sum and General Games and Stochastic Teams ⋮ An equilibrium existence result for games with incomplete information and indeterminate outcomes ⋮ Perfect equilibria in games of incomplete information ⋮ Constrained discounted stochastic games ⋮ The Expected Total Cost Criterion for Markov Decision Processes under Constraints: A Convex Analytic Approach ⋮ Optimal Control of Piecewise Deterministic Markov Processes ⋮ Conditions for the solvability of the linear programming formulation for constrained discounted Markov decision processes ⋮ Bayesian learning and convergence to rational expectations ⋮ Optimal learning with costly adjustment ⋮ Stationary Markov Nash Equilibria for Nonzero-Sum Constrained ARAT Markov Games ⋮ Geometry of information structures, strategic measures and associated stochastic control topologies ⋮ Self-fulfilling expectations in stochastic processes of temporary equilibria ⋮ On compactness of the space of policies in stochastic dynamic programming ⋮ Markov Decision Processes with Incomplete Information and Semiuniform Feller Transition Probabilities ⋮ Semi-uniform Feller stochastic kernels ⋮ Zero-sum games involving teams against teams: existence of equilibria, and comparison and regularity in information ⋮ Equivalent conditions for weak continuity of nonlinear filters ⋮ The martingale problem method revisited ⋮ On the expected total reward with unbounded returns for Markov decision processes ⋮ Extreme Occupation Measures in Markov Decision Processes with an Absorbing State ⋮ Nash equilibria for total expected reward absorbing Markov games: the constrained and unconstrained cases ⋮ Absorbing Markov decision processes ⋮ A Universal Dynamic Program and Refined Existence Results for Decentralized Stochastic Control ⋮ A Convex Programming Approach for Discrete-Time Markov Decision Processes under the Expected Total Reward Criterion ⋮ Conditions for optimality in dynamic programming and for the limit of n-stage optimal policies to be optimal ⋮ Maximizing the probability of visiting a set infinitely often for a countable state space Markov decision process ⋮ Sufficiency of Deterministic Policies for Atomless Discounted and Uniformly Absorbing MDPs with Multiple Criteria ⋮ Compactness of the space of non-randomized policies in countable-state sequential decision processes ⋮ Essential stability of the alpha cores of finite games with incomplete information ⋮ Multiobjective Stopping Problem for Discrete-Time Markov Processes: Convex Analytic Approach ⋮ Constrained discounted Markov decision processes with Borel state spaces ⋮ Equilibria in infinite games of incomplete information ⋮ Constrained and Unconstrained Optimal Discounted Control of Piecewise Deterministic Markov Processes ⋮ On the Existence of Nash Equilibrium in Bayesian Games ⋮ Semicontinuous nonstationary stochastic games. II ⋮ Large deviations principle for discrete-time mean-field games ⋮ Constrained Markov Decision Processes with Expected Total Reward Criteria ⋮ Markov decision processes under ambiguity ⋮ Comparison of Information Structures for Zero-Sum Games and a Partial Converse to Blackwell Ordering in Standard Borel Spaces ⋮ Optimality, equilibrium, and curb sets in decision problems without commitment ⋮ Nowak's Theorem on Probability Measures Induced by Strategies Revisited ⋮ Convex analytic method revisited: further optimality results and performance of deterministic policies in average cost stochastic control ⋮ Multiple objective nonatomic Markov decision processes with total reward criteria ⋮ On maximizing the average time at a goal ⋮ Constrained Markovian decision processes: The dynamic programming approach ⋮ Existence of optimal policy for time non-homogeneous discounted Markovian decision programming ⋮ Strategic measures in optimal control problems for stochastic sequences
Cites Work
- Instationäre dynamische Optimierung bei schwachen Voraussetzungen über die Gewinnfunktionen [Non-stationary dynamic optimization under weak assumptions on the reward functions]
- Bayesian dynamic programming
- Markovian Decision Processes with Compact Action Spaces
- Discounted Dynamic Programming
- Negative Dynamic Programming
- Compactness in spaces of measures
- Compactness and sequential compactness in spaces of measures
- On continuous dynamic programming with discrete time-parameter
- An Extension of Wald's Theory of Statistical Decision Functions