Sample Efficient Policy Gradient Methods with Recursive Variance Reduction
Publication: 6325577
arXiv: 1909.08610
MaRDI QID: Q6325577
Author name not available
Publication date: 18 September 2019
Abstract: Improving the sample efficiency in reinforcement learning has been a long-standing research problem. In this work, we aim to reduce the sample complexity of existing policy gradient methods. We propose a novel policy gradient algorithm called SRVR-PG, which only requires $O(1/\epsilon^{3/2})$ episodes to find an $\epsilon$-approximate stationary point of the nonconcave performance function $J(\boldsymbol{\theta})$ (i.e., $\boldsymbol{\theta}$ such that $\|\nabla J(\boldsymbol{\theta})\|_2^2 \le \epsilon$). This sample complexity improves the existing result $O(1/\epsilon^{5/3})$ for stochastic variance reduced policy gradient algorithms by a factor of $O(1/\epsilon^{1/6})$. In addition, we propose a variant of SRVR-PG with parameter exploration, which explores the initial policy parameter from a prior probability distribution. We conduct numerical experiments on classic control problems in reinforcement learning to validate the performance of our proposed algorithms.
Has companion code repository: https://github.com/xgfelicia/SRVRPG
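The companion repository contains the authors' full implementation. As a rough illustration of the recursive variance-reduction idea named in the title, the sketch below applies a SARAH-style, importance-weighted policy gradient estimator to a toy one-step Gaussian-policy problem. Everything here is an assumption for illustration: the function names (sample_episodes, grad_log_prob, pg_estimate, srvr_pg), the quadratic-reward toy environment, and the specific batch sizes and step size are not taken from the paper or its code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-step "environment" (illustrative, not from the paper):
# action a ~ N(theta, 1), reward r = -a**2, so J(theta) is maximized at theta = 0.

def sample_episodes(theta, n):
    actions = rng.normal(theta, 1.0, size=n)
    rewards = -actions**2
    return actions, rewards

def grad_log_prob(theta, actions):
    # d/dtheta log N(a; theta, 1) = a - theta
    return actions - theta

def pg_estimate(theta, actions, rewards, weights=None):
    # REINFORCE estimator g(tau | theta) = grad log pi(a | theta) * r,
    # optionally importance-weighted when tau was sampled from another policy.
    g = grad_log_prob(theta, actions) * rewards
    if weights is not None:
        g = g * weights
    return g.mean()

def srvr_pg(theta0, outer_iters=20, inner_iters=5, N=500, B=50, lr=0.05):
    theta = theta0
    for _ in range(outer_iters):
        # Reference (large-batch) gradient estimate at the snapshot point.
        a, r = sample_episodes(theta, N)
        v = pg_estimate(theta, a, r)
        theta_prev, theta = theta, theta + lr * v          # gradient ascent on J(theta)
        for _ in range(inner_iters):
            a, r = sample_episodes(theta, B)                # small batch from the current policy
            # Importance weights pi_{theta_prev}(a) / pi_theta(a) correct for the policy shift.
            w = np.exp(-0.5 * ((a - theta_prev)**2 - (a - theta)**2))
            # Recursive (SARAH-style) update of the variance-reduced gradient estimate.
            v = v + pg_estimate(theta, a, r) - pg_estimate(theta_prev, a, r, weights=w)
            theta_prev, theta = theta, theta + lr * v
    return theta

print(srvr_pg(theta0=3.0))  # should approach 0, the maximizer of E[-a^2]
```

In this sketch the expensive large-batch gradient is recomputed only once per outer iteration, while the inner loop reuses it through the recursive correction term, which is the mechanism that drives the improved sample complexity discussed in the abstract.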