Reduced-dimensional reinforcement learning control using singular perturbation approximations
DOI: 10.1016/j.automatica.2020.109451
zbMath: 1461.93330
arXiv: 2004.14501
OpenAlex: W3023262556
Wikidata: Q114204920 (Scholia: Q114204920)
MaRDI QID: Q2664206
Sayak Mukherjee, Aranya Chakrabortty, He Bai
Publication date: 20 April 2021
Published in: Automatica
Full work available at URL: https://arxiv.org/abs/2004.14501
Keywords: singular perturbation; linear quadratic regulator; model reduction; reinforcement learning; model-free control
MSC: Feedback control (93B52); Linear systems in control theory (93C05); Time-scale analysis and singular perturbations in control/observation systems (93C70); System structure simplification (93B11)
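To make the keyword combination concrete, the following is a minimal model-based sketch in Python (NumPy/SciPy) of the reduced-order LQR idea underlying the singular perturbation approximation: the fast dynamics are eliminated at quasi-steady state and an LQR gain is designed for the slow subsystem only. The numerical matrices and weights are hypothetical, and the sketch does not reproduce the paper's model-free reinforcement learning procedure, which learns such a reduced-dimensional controller from data rather than from known (A, B) matrices.

# Illustrative sketch (not the paper's RL algorithm): reduced-order LQR design
# for a two-time-scale linear system via the standard singular perturbation
# approximation. All numerical values below are made-up examples.
import numpy as np
from scipy.linalg import solve_continuous_are

# Singularly perturbed plant:
#   x_dot       = A11 x + A12 z + B1 u   (slow states x)
#   eps * z_dot = A21 x + A22 z + B2 u   (fast states z, A22 Hurwitz)
A11 = np.array([[0.0, 1.0], [-1.0, -0.5]])
A12 = np.array([[0.0], [1.0]])
A21 = np.array([[1.0, 0.0]])
A22 = np.array([[-5.0]])
B1 = np.array([[0.0], [1.0]])
B2 = np.array([[2.0]])

# Quasi-steady-state approximation: set eps -> 0, eliminate the fast state via
# z ~ -A22^{-1} (A21 x + B2 u), which gives the reduced slow model (A0, B0).
A22_inv = np.linalg.inv(A22)
A0 = A11 - A12 @ A22_inv @ A21
B0 = B1 - A12 @ A22_inv @ B2

# LQR on the reduced slow model only (weights Q0, R0 chosen arbitrarily here).
Q0 = np.eye(2)
R0 = np.eye(1)
P = solve_continuous_are(A0, B0, Q0, R0)
K_slow = np.linalg.solve(R0, B0.T @ P)   # u = -K_slow x on the slow states

print("Reduced-order LQR gain K_slow =", K_slow)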
Cites Work
- Computational adaptive optimal control for continuous-time linear systems with completely unknown dynamics
- Q-learning for continuous-time linear systems: A model-free infinite horizon optimal control approach
- Adaptive optimal control for continuous-time linear systems based on policy iteration
- Singular perturbations and order reduction in control theory - an overview
- Time scale modeling of sparse dynamic networks
- A decomposition of near-optimum regulators for systems with slow and fast modes
- Control strategies for decision makers using different models of the same system
- Robust Adaptive Dynamic Programming