A Tour of Reinforcement Learning: The View from Continuous Control
Publication:6303462
arXiv: 1806.09460
MaRDI QID: Q6303462
Author name not available
Publication date: 25 June 2018
Abstract: This manuscript surveys reinforcement learning from the perspective of optimization and control, with a focus on continuous control applications. It reviews the general formulation, terminology, and typical experimental implementations of reinforcement learning and compares competing solution paradigms. To compare the relative merits of various techniques, this survey presents a case study of the Linear Quadratic Regulator (LQR) with unknown dynamics, perhaps the simplest and best-studied problem in optimal control. The manuscript describes how merging techniques from learning theory and control can provide non-asymptotic characterizations of LQR performance, and shows that these characterizations tend to match experimental behavior. In turn, when more complex applications are revisited, many of the phenomena observed in LQR persist. In particular, theory and experiment demonstrate the role and importance of models and the cost of generality in reinforcement learning algorithms. This survey concludes with a discussion of some of the challenges in designing learning systems that safely and reliably interact with complex and uncertain environments, and how tools from reinforcement learning and control might be combined to approach these challenges.
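The LQR case study mentioned in the abstract has a compact computational core: with known dynamics (A, B) and quadratic costs (Q, R), the optimal state-feedback gain follows from the discrete-time Riccati equation. The sketch below (a minimal illustration, not code from the survey; the double-integrator system and iteration count are assumptions for demonstration) computes this gain by Riccati value iteration, the baseline against which learning with unknown dynamics is compared.

```python
import numpy as np

def lqr_gain(A, B, Q, R, iters=500):
    """Discrete-time LQR via Riccati value iteration.

    Iterates P <- Q + A^T P (A - B K) with
    K = (R + B^T P B)^{-1} B^T P A until (approximate) convergence.
    Returns the feedback gain K (control u = -K x) and cost matrix P.
    """
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K, P

# Hypothetical example: a double integrator (position/velocity) with
# unit-weight quadratic costs.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.eye(1)

K, P = lqr_gain(A, B, Q, R)
# The closed-loop matrix A - B K should be stable (spectral radius < 1).
rho = max(abs(np.linalg.eigvals(A - B @ K)))
```

The survey's point is what happens when A and B are *not* known: model-based methods estimate them from trajectory data before solving the problem above, while model-free methods search over gains K directly, and the two approaches trade sample efficiency against generality.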
Has companion code repository: https://github.com/matejker/controllability-of-complex-networks