Efficient off-policy Q-learning for multi-agent systems by solving dual games
Publication: 6197958
DOI: 10.1002/rnc.7189 · OpenAlex: W4390733893 · MaRDI QID: Q6197958
No author found.
Publication date: 20 March 2024
Published in: International Journal of Robust and Nonlinear Control
Full work available at URL: https://doi.org/10.1002/rnc.7189
MSC classifications: Noncooperative games (91A10); 2-person games (91A05); Applications of game theory (91A80); Multi-agent systems (93A16)
Cites Work
- Multi-agent zero-sum differential graphical games for disturbance rejection in distributed control
- Motion synchronization in unmanned aircrafts formation control with communication delays
- Consensus of linear multi-agent systems based on full-order observer
- Cooperative adaptive optimal output regulation of nonlinear discrete-time multi-agent systems
- Data-driven optimal tracking control of discrete-time multi-agent systems with two-stage policy iteration algorithm
- Q-learning algorithm in solving consensusability problem of discrete-time multi-agent systems
- Off-policy learning for adaptive optimal output synchronization of heterogeneous multi-agent systems
- Optimal consensus control for unknown second-order multi-agent systems: using model-free reinforcement learning method
- Q-learning solution for optimal consensus control of discrete-time multiagent systems using reinforcement learning
- Data-driven optimal tracking control of discrete-time linear systems with multiple delays via the value iteration algorithm
- Non-zero sum differential graphical game: cluster synchronisation for multi-agents with partially unknown dynamics