On the Optimization Landscape of Dynamic Output Feedback Linear Quadratic Control

MaRDI QID: Q6389004
arXiv: 2201.09598

Author name not available.

Publication date: 24 January 2022

Abstract: The optimization landscape of optimal control problems plays an important role in the convergence of many policy gradient methods. Unlike the state-feedback Linear Quadratic Regulator (LQR), static output-feedback policies are typically insufficient to achieve good closed-loop control performance. We investigate the optimization landscape of linear quadratic control with dynamic output-feedback policies, referred to as dynamic LQR (dLQR) in this paper. We first show that the dLQR cost varies with similarity transformations of the controller state. We then derive an explicit form of the optimal similarity transformation for a given observable stabilizing controller. We further characterize the unique observable stationary point of dLQR, which provides an optimality certificate for policy gradient methods under mild assumptions. Finally, we discuss the differences and connections between dLQR and canonical Linear Quadratic Gaussian (LQG) control. These results shed light on the design of policy gradient algorithms for decision-making problems with partially observed information.
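The abstract's first claim, that the dLQR cost changes under a similarity transformation of the controller realization, is easy to probe numerically. Below is a minimal Python sketch, written independently of the paper and its companion repository. It assumes a discrete-time plant x_{t+1} = A x_t + B u_t, y_t = C x_t, a dynamic output-feedback controller xi_{t+1} = A_K xi_t + B_K y_t, u_t = C_K xi_t, and, crucially, a full-rank covariance Sigma_0 for the initial augmented state [x_0; xi_0]. The plant, cost weights, and observer-based seed controller are hypothetical choices for illustration, not the paper's construction.

```python
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov, block_diag

rng = np.random.default_rng(0)

# Hypothetical 2-state plant: x_{t+1} = A x_t + B u_t,  y_t = C x_t
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), np.eye(1)

# Seed with an observer-based stabilizing controller (a convenient choice,
# not taken from the paper):
#   xi_{t+1} = (A - B K - L C) xi_t + L y_t,  u_t = -K xi_t
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)        # LQR state-feedback gain
Po = solve_discrete_are(A.T, C.T, np.eye(2), np.eye(1))  # dual ARE for the observer
L = A @ Po @ C.T @ np.linalg.inv(C @ Po @ C.T + np.eye(1))
Ak, Bk, Ck = A - B @ K - L @ C, L, -K

def dlqr_cost(Ak, Bk, Ck, Sigma0):
    """Closed-loop cost tr(P_cl Sigma0), with P_cl from a discrete Lyapunov equation."""
    Acl = np.block([[A, B @ Ck],
                    [Bk @ C, Ak]])
    assert np.max(np.abs(np.linalg.eigvals(Acl))) < 1, "controller not stabilizing"
    Qcl = block_diag(Q, Ck.T @ R @ Ck)
    # Solves Acl' P Acl - P + Qcl = 0
    Pcl = solve_discrete_lyapunov(Acl.T, Qcl)
    return np.trace(Pcl @ Sigma0)

# Full-rank initial covariance for [x_0; xi_0] -- the key assumption that
# makes the cost depend on the controller's internal coordinates.
Sigma0 = np.eye(4)

# Similarity transformation of the controller realization:
#   A_K -> T A_K T^{-1},  B_K -> T B_K,  C_K -> C_K T^{-1}
T = rng.standard_normal((2, 2)) + 2 * np.eye(2)  # generically invertible
Tinv = np.linalg.inv(T)

# The two costs generically differ: J is not invariant under T.
print("J(K)  =", dlqr_cost(Ak, Bk, Ck, Sigma0))
print("J(K_T) =", dlqr_cost(T @ Ak @ Tinv, T @ Bk, Ck @ Tinv, Sigma0))
```

For contrast, replacing Sigma0 with block_diag(np.eye(2), np.zeros((2, 2))) (controller state deterministically initialized at zero) makes the two printed costs coincide, since the transformed controller then reproduces the same input-output trajectories from the same initial condition; the dependence on T above stems entirely from fixing a full-rank Sigma0.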

Companion code repository: https://github.com/soc-ucsd/lqg_gradient
