A Hybrid Stochastic Policy Gradient Algorithm for Reinforcement Learning

From MaRDI portal
Publication:6335899

arXiv: 2003.00430 · MaRDI QID: Q6335899

Quoc Tran-Dinh, Lam M. Nguyen, Dzung T. Phan, Marten van Dijk, Phuong Ha Nguyen, Nhan H. Pham

Publication date: 1 March 2020

Abstract: We propose a novel hybrid stochastic policy gradient estimator that combines an unbiased policy gradient estimator, the REINFORCE estimator, with a biased one, an adapted SARAH estimator, for policy optimization. The hybrid policy gradient estimator is shown to be biased but variance-reduced. Using this estimator, we develop a new Proximal Hybrid Stochastic Policy Gradient Algorithm (ProxHSPGA) to solve a composite policy optimization problem that allows us to handle constraints or regularizers on the policy parameters. We first propose a single-loop algorithm and then introduce a more practical restarting variant. We prove that both algorithms achieve the best-known trajectory complexity $\mathcal{O}\left(\varepsilon^{-3}\right)$ to attain a first-order stationary point of the composite problem, which improves on the existing $\mathcal{O}\left(\varepsilon^{-4}\right)$ complexity of REINFORCE/GPOMDP and the $\mathcal{O}\left(\varepsilon^{-10/3}\right)$ complexity of SVRPG in the non-composite setting. We evaluate the performance of our algorithm on several well-known examples in reinforcement learning. Numerical results show that our algorithm outperforms two existing methods on these examples. Moreover, the composite setting indeed has some advantages over the non-composite one on certain problems.
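The abstract describes a single core update: an unbiased REINFORCE gradient is mixed with a SARAH-style recursive correction, and the resulting direction is passed through a proximal step that handles the regularizer in the composite objective. The following is a minimal sketch of such an update, not the authors' implementation (see the companion repository linked below); the function names (grad_estimate, sample_trajectories, soft_threshold), the choice of an l1 regularizer, and all step sizes are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of lam * ||x||_1 (one possible choice of regularizer R).
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def proxhspga_step(theta, theta_prev, v_prev, grad_estimate, sample_trajectories,
                   beta=0.5, eta=0.01, lam=1e-3):
    # One hybrid update: mix an unbiased REINFORCE estimate with a SARAH-style
    # recursive correction, then take a proximal gradient-ascent step on the
    # regularized objective.
    batch = sample_trajectories(theta)
    g_unbiased = grad_estimate(theta, batch)                 # unbiased REINFORCE term
    # SARAH-style difference term; the paper evaluates both gradients on the same
    # trajectories via importance weighting, which this sketch glosses over.
    g_diff = grad_estimate(theta, batch) - grad_estimate(theta_prev, batch)
    v = beta * g_unbiased + (1.0 - beta) * (v_prev + g_diff)  # hybrid estimator
    theta_next = soft_threshold(theta + eta * v, eta * lam)   # ascent step + prox of R
    return theta_next, v

if __name__ == "__main__":
    # Toy smoke test with a noisy quadratic surrogate in place of a real RL objective.
    rng = np.random.default_rng(0)
    grad = lambda theta, batch: -theta + 0.1 * rng.standard_normal(theta.shape)
    sample = lambda theta: None
    theta, theta_prev, v = np.ones(4), np.ones(4), np.zeros(4)
    for _ in range(100):
        new_theta, v = proxhspga_step(theta, theta_prev, v, grad, sample)
        theta_prev, theta = theta, new_theta
    print("final theta:", theta)
```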




Has companion code repository: https://github.com/unc-optimization/ProxHSPGA

This page was built for publication: A Hybrid Stochastic Policy Gradient Algorithm for Reinforcement Learning
