Finite-horizon Q-learning for discrete-time zero-sum games with application to \(H_{\infty}\) control
Publication: 6581158
DOI: 10.1002/asjc.3027
MaRDI QID: Q6581158
Dan-Dan Li, Mingxiang Liu, Qianqian Cai, Minyue Fu, Wei Meng
Publication date: 30 July 2024
Published in: Asian Journal of Control
Cites Work
- Reinforcement \(Q\)-learning for optimal tracking control of linear discrete-time systems with unknown dynamics
- \(\mathrm{H}_\infty\) control of linear discrete-time systems: off-policy reinforcement learning
- Robust optimal control for finite-horizon zero-sum differential games via a plug-n-play event-triggered scheme
- Model-free \(Q\)-learning designs for linear discrete-time zero-sum games with application to \(H^\infty\) control
- Output feedback Q-learning for discrete-time linear zero-sum games with application to the \(H_\infty\) control
- Discrete-time \(H_{\infty}\) preview control problem in finite horizon
- \({\mathcal Q}\)-learning
- Adaptive dynamic programming for online solution of a zero-sum differential game
- An Iterative Data-Driven Linear Quadratic Method to Solve Nonlinear Discrete-Time Tracking Problems
- Approximate Dynamic Programming
- Stability Analysis of Discrete-Time Infinite-Horizon Optimal Control With Discounted Cost
- \(H^ \infty\)-optimal control and related minimax design problems. A dynamic game approach.
- Robust Output Regulation and Reinforcement Learning-Based Output Tracking Design for Unknown Linear Discrete-Time Systems
- Data‐based robust optimal control of discrete‐time systems with uncertainties via adaptive dynamic programming
- Q-learning for noise covariance adaptation in extended Kalman filter