Learning Stabilizing Controllers of Linear Systems via Discount Policy Gradient

From MaRDI portal
Publication:6385954

arXiv: 2112.09294
MaRDI QID: Q6385954

Author name not available

Publication date: 16 December 2021

Abstract: Stability is one of the most fundamental requirements for system synthesis. In this paper, we address the stabilization problem for unknown linear systems via policy gradient (PG) methods. We leverage a key feature of PG for the Linear Quadratic Regulator (LQR): provided an initial policy with finite cost, the descent direction drives the policy away from the boundary of the non-stabilizing region. To this end, we discount the LQR cost with a factor; by adaptively increasing this factor, gradient descent leads the policy into the stabilizing set while maintaining a finite cost. Based on Lyapunov theory, we design an update rule for the discount factor that can be computed directly from data, rendering our method purely model-free. Compared to the recent work of Perdomo et al. (2021), our algorithm updates the policy only once for each discount factor. Moreover, the number of sampled trajectories and the simulation time for gradient descent are significantly reduced, to $\mathcal{O}(\log(1/\epsilon))$ for a desired accuracy $\epsilon$. Finally, we conduct simulations on both small-scale and large-scale examples to show the efficiency of our discount PG method.
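To make the discounting idea concrete, the following Python sketch implements a model-based analogue of the procedure for a known system (A, B): it computes exact gradients of the gamma-discounted LQR cost via discrete Lyapunov equations, takes one policy-gradient step per discount factor, and enlarges the factor only while the discounted cost stays finite. The actual algorithm in the paper is model-free (gradients and the discount-factor update are estimated from data, with a Lyapunov-based rule for the factor), so the function names, the multiplicative increase `beta`, and the backtracking safeguard below are illustrative assumptions, not taken from the paper or the companion repository.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov


def discounted_lqr_cost_and_grad(K, A, B, Q, R, gamma, Sigma0):
    """Exact cost and gradient of the gamma-discounted LQR objective.

    Discounting by gamma is equivalent to replacing (A, B) with
    (sqrt(gamma) A, sqrt(gamma) B); the cost is finite exactly when
    sqrt(gamma) * (A - B K) is Schur stable.
    """
    Acl = np.sqrt(gamma) * (A - B @ K)
    if np.max(np.abs(np.linalg.eigvals(Acl))) >= 1.0:
        return np.inf, None
    # Value matrix P_K and discounted state correlation Sigma_K from
    # discrete Lyapunov equations.
    P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
    Sigma = solve_discrete_lyapunov(Acl, Sigma0)
    cost = np.trace(P @ Sigma0)
    grad = 2.0 * ((R + gamma * B.T @ P @ B) @ K - gamma * B.T @ P @ A) @ Sigma
    return cost, grad


def stabilize_via_discount_pg(A, B, Q, R, Sigma0, eta=1e-2, beta=1.1, max_iters=20000):
    """Sketch of the outer loop: one policy update per discount factor,
    enlarging the factor whenever the enlarged discounted cost stays finite."""
    n, m = B.shape[0], B.shape[1]
    K = np.zeros((m, n))
    rho = np.max(np.abs(np.linalg.eigvals(A)))    # open-loop spectral radius (K = 0)
    gamma = min(1.0, 0.9 / max(rho ** 2, 1e-12))  # finite initial discounted cost
    for _ in range(max_iters):
        cost, grad = discounted_lqr_cost_and_grad(K, A, B, Q, R, gamma, Sigma0)
        if grad is None:
            raise RuntimeError("discounted cost diverged; initial gamma too large")
        if gamma >= 1.0:
            return K  # finite undiscounted cost implies A - B K is Schur stable
        # One descent step, with a backtracking safeguard (not part of the paper).
        step = eta
        for _ in range(60):
            K_new = K - step * grad
            c_new, _ = discounted_lqr_cost_and_grad(K_new, A, B, Q, R, gamma, Sigma0)
            if np.isfinite(c_new) and c_new <= cost:
                K = K_new
                break
            step *= 0.5
        # Stand-in for the paper's Lyapunov-based, data-driven update of gamma:
        # increase the factor only if the policy keeps a finite cost under it.
        gamma_next = min(1.0, beta * gamma)
        c_next, _ = discounted_lqr_cost_and_grad(K, A, B, Q, R, gamma_next, Sigma0)
        if np.isfinite(c_next):
            gamma = gamma_next
    raise RuntimeError("did not reach gamma = 1 within max_iters")


if __name__ == "__main__":
    # Open-loop unstable second-order example.
    A = np.array([[1.2, 0.5], [0.0, 1.1]])
    B = np.array([[0.0], [1.0]])
    Q, R, Sigma0 = np.eye(2), np.eye(1), np.eye(2)
    K = stabilize_via_discount_pg(A, B, Q, R, Sigma0)
    print("closed-loop spectral radius:",
          np.max(np.abs(np.linalg.eigvals(A - B @ K))))
```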




Has companion code repository: https://github.com/fuxy16/stabilize-via-pg

This page was built for publication: Learning Stabilizing Controllers of Linear Systems via Discount Policy Gradient
