\(\mathcal{H}_\infty\) tracking learning control for discrete-time Markov jump systems: a parallel off-policy reinforcement learning
Publication: 6177279
DOI: 10.1016/j.jfranklin.2023.10.008 · zbMath: 1530.93100 · OpenAlex: W4387537376 · MaRDI QID: Q6177279
Jing Wang, Jian-Wei Xia, Xiangyong Chen, Xuewen Zhang, Hao Shen
Publication date: 17 January 2024
Published in: Journal of the Franklin Institute
Full work available at URL: https://doi.org/10.1016/j.jfranklin.2023.10.008
MSC classification: Discrete-time control/observation systems (93C55); \(H^\infty\)-control (93B36); Linear systems in control theory (93C05); Stochastic systems in control theory (general) (93E03)
Cited works
- Neural network approach to continuous-time direct adaptive optimal control for partially unknown nonlinear systems
- Reinforcement \(Q\)-learning for optimal tracking control of linear discrete-time systems with unknown dynamics
- \(\mathrm{H}_\infty\) control of linear discrete-time systems: off-policy reinforcement learning
- Robust \(H_{\infty}\) filtering for a class of nonlinear discrete-time Markovian jump systems
- Output feedback Q-learning for discrete-time linear zero-sum games with application to the \(H_\infty\) control
- A Markov jump model approach to reliable event-triggered retarded dynamic output feedback \(\mathcal{H}_\infty\) control for networked systems
- Optimal control of variable-speed wind turbines modeled as Markov jump systems
- Adaptive fuzzy sliding-mode consensus control of nonlinear under-actuated agents in a near-optimal reinforcement learning framework
- Self-triggered control of probabilistic Boolean control networks: a reinforcement learning approach
- Improved fuzzy control design for nonlinear Markovian-jump systems with incomplete transition descriptions
- Fault-tolerant tracking control based on reinforcement learning with application to a steer-by-wire system
- Reinforcement Learning with Restrictions on the Action Set
- Linear Quadratic Tracking Control of Partially-Unknown Continuous-Time Systems Using Reinforcement Learning
- Dynamic Output Feedback Control of Discrete-Time Markov Jump Linear Systems through Linear Matrix Inequalities
- Stochastic stability properties of jump linear systems
- SMC Design for Robust Stabilization of Nonlinear Markovian Jump Singular Systems
- Data-Driven Model-Free Controllers
- Simple and Optimal Methods for Stochastic Variational Inequalities, II: Markovian Noise and Policy Evaluation in Reinforcement Learning
- Reinforcement Learning and Feedback Control: Using Natural Decision Methods to Design Optimal Adaptive Controllers
- Online optimal tracking control of continuous-time linear systems with unknown dynamics by using adaptive dynamic programming
- Dissipativity-based finite-time asynchronous output feedback control for wind turbine system via a hidden Markov model
- Adaptive neural dissipative control for Markovian jump cyber-physical systems against sensor and actuator attacks