Approximate Midpoint Policy Iteration for Linear Quadratic Control
Publication: 6354765
arXiv: 2011.14212
MaRDI QID: Q6354765
Tyler Summers, Benjamin Gravell, Iman Shames
Publication date: 28 November 2020
Abstract: We present a midpoint policy iteration algorithm to solve linear quadratic optimal control problems in both model-based and model-free settings. The algorithm is a variation of Newton's method, and we show that in the model-based setting it achieves cubic convergence, which is superior to standard policy iteration and policy gradient algorithms, which achieve quadratic and linear convergence, respectively. We also demonstrate that the algorithm can be approximately implemented without knowledge of the dynamics model by using least-squares estimates of the state-action value function from trajectory data, from which policy improvements can be obtained. With sufficient trajectory data, the policy iterates converge cubically to approximately optimal policies, and this occurs with the same sample budget as approximate standard policy iteration. Numerical experiments demonstrate the effectiveness of the proposed algorithms.
Has companion code repository: https://github.com/TSummersLab/midpoint-policy-iteration
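As context for the abstract's comparison, the following is a minimal sketch of the standard policy iteration baseline for discrete-time LQR (policy evaluation via a Lyapunov equation, then a gain improvement step), the method the abstract credits with quadratic convergence. This is not the paper's midpoint variant; for that, see the linked repository. The system matrices below are hypothetical illustrative values.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, solve_discrete_are

# Hypothetical stable system (illustrative only, not from the paper)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)   # state cost
R = np.eye(1)   # input cost

# Initial stabilizing gain (K = 0 works here since A is stable)
K = np.zeros((1, 2))

for _ in range(10):
    # Policy evaluation: P solves P = Acl' P Acl + Q + K' R K
    Acl = A - B @ K
    P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
    # Policy improvement: K = (R + B' P B)^{-1} B' P A
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Sanity check against the Riccati solution
P_are = solve_discrete_are(A, B, Q, R)
K_are = np.linalg.solve(R + B.T @ P_are @ B, B.T @ P_are @ A)
print(np.allclose(K, K_are, atol=1e-8))
```

The paper's midpoint scheme replaces the single Newton-type improvement step with an evaluation at an intermediate ("midpoint") policy, which is what lifts the convergence rate from quadratic to cubic.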