Infinite horizon average cost optimality criteria for mean-field control
From MaRDI portal
DOI: 10.1137/23m1603649
MaRDI QID: Q6622712
Erhan Bayraktar, Ali Devran Kara
Publication date: 22 October 2024
Published in: SIAM Journal on Control and Optimization
Cites Work
- Stationary anonymous sequential games with undiscounted rewards
- On solutions of mean field games with ergodic cost
- Mean field games
- Ergodic mean field games with Hörmander diffusions
- Adaptive Markov control processes
- Long time average of mean field games
- Solvability of infinite horizon McKean-Vlasov FBSDEs in mean field control problems and games
- McKean-Vlasov optimal control: the dynamic programming principle
- Mean-field Markov decision processes with common noise and open-loop controls
- Unified reinforcement Q-learning for mean field game and control problems
- Discrete-time ergodic mean-field games with average reward on compact spaces
- Value iteration algorithm for mean-field games
- On the existence of classical solutions for stationary extended mean field games
- Large population stochastic dynamic games: closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle
- The derivation of ergodic mean field game equations for several populations of players
- Mean field games models -- a brief survey
- Numerical resolution of McKean-Vlasov FBSDEs using neural networks
- Mean field Markov decision processes
- Stationary focusing mean-field games
- Probabilistic Analysis of Mean-Field Games
- Risk-Sensitive Mean-Field Games
- Convergence of Dynamic Programming Models
- Markov--Nash Equilibria in Mean-Field Games with Discounted Cost
- Randomized dynamic programming principle and Feynman-Kac representation for optimal control of McKean-Vlasov dynamics
- Discrete-Time Controlled Markov Processes with Average Cost Criterion: A Survey
- Discrete-time average-cost mean-field games on Polish spaces
- Convergence Analysis of Machine Learning Algorithms for the Numerical Solution of Mean Field Control and Games I: The Ergodic Case
- Mean-Field Controls with Q-Learning for Cooperative MARL: Convergence and Complexity Analysis
- Mean-Field Limit for a Class of Stochastic Ergodic Control Problems
- Approximate Nash Equilibria in Partially Observed Stochastic Games with Mean-Field Interactions
- Linear-Quadratic $N$-person and Mean-Field Games with Ergodic Cost
- Mean-field optimal control as Gamma-limit of finite agent controls
- Large-Population Cost-Coupled LQG Problems With Nonuniform Agents: Individual-Mass Behavior and Decentralized $\varepsilon$-Nash Equilibria
- Mean Field Games and Mean Field Type Control Theory
- Limit Theory for Controlled McKean-Vlasov Dynamics
- Dynamic Programming for Optimal Control of Stochastic McKean-Vlasov Dynamics
- Optimal Policies for Convex Symmetric Stochastic Dynamic Teams and their Mean-Field Limit
- McKean-Vlasov Optimal Control: Limit Theory and Equivalence Between Different Formulations
- Dynamic programming for mean-field type control
- Quantitative propagation of chaos for mean field Markov decision process with common noise
- Optimality of Independently Randomized Symmetric Policies for Exchangeable Stochastic Teams with Infinitely Many Decision Makers
- Model-free mean-field reinforcement learning: mean-field MDP and mean-field Q-learning
- Q-learning in regularized mean-field games
- Ergodic control of McKean-Vlasov SDEs and associated Bellman equation
- Mean field control and finite agent approximation for regime-switching jump diffusions
- Dynamic Programming Principles for Mean-Field Controls with Learning