Optimistic Distributionally Robust Policy Optimization


arXiv: 2006.07815
MaRDI QID: Q6342854

Author name not available

Publication date: 14 June 2020

Abstract: Trust Region Policy Optimization (TRPO) and Proximal Policy Optimization (PPO), two widely used policy-based reinforcement learning (RL) methods, are prone to converging to sub-optimal solutions because they restrict the policy representation to a particular parametric distribution class. To address this issue, we develop Optimistic Distributionally Robust Policy Optimization (ODRPO), an algorithm that uses an optimistic distributionally robust optimization (DRO) approach to solve the trust-region-constrained optimization problem without parameterizing the policies. Our algorithm improves on TRPO and PPO with higher sample efficiency and better final-policy performance while retaining learning stability. Moreover, it achieves a globally optimal policy update, which is not guaranteed by prevailing policy-based RL algorithms. Experiments across tabular domains and robotic locomotion tasks demonstrate the effectiveness of our approach.
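The abstract refers to solving the trust-region-constrained policy update without restricting the policy to a parametric class. As a rough, hypothetical illustration (not the authors' ODRPO algorithm), the following sketch shows the generic non-parametric, tabular case: maximizing expected advantage subject to a KL constraint against the old policy yields an exponentiated-advantage reweighting in closed form. The function name trust_region_update and the fixed temperature parameter are assumptions for illustration only; in a constrained formulation the temperature would be tied to the KL radius via a Lagrange multiplier.

```python
# Hypothetical sketch of a non-parametric, tabular trust-region policy update.
# Not the authors' ODRPO method: it only illustrates the closed-form solution of
#   max_pi  E_pi[A(s, a)]   s.t.   KL(pi || pi_old) <= delta
# which reweights the old policy by exponentiated advantages.

import numpy as np

def trust_region_update(pi_old, advantages, temperature=1.0):
    """Reweight a tabular policy by exponentiated advantages.

    pi_old:      array (n_states, n_actions), each row sums to 1.
    advantages:  array (n_states, n_actions), estimated A(s, a).
    temperature: stand-in for the Lagrange multiplier of the KL constraint;
                 fixed here rather than tuned to a specific KL radius.
    """
    logits = np.log(pi_old + 1e-12) + advantages / temperature
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    pi_new = np.exp(logits)
    pi_new /= pi_new.sum(axis=1, keepdims=True)   # renormalize per state
    return pi_new

# Toy usage: 3 states, 2 actions, uniform old policy.
rng = np.random.default_rng(0)
pi_old = np.full((3, 2), 0.5)
advantages = rng.normal(size=(3, 2))
print(trust_region_update(pi_old, advantages, temperature=0.5))
```

Because the update is computed state-by-state over the full action simplex rather than by gradient steps on policy parameters, it sidesteps the parametric restriction the abstract identifies as the source of sub-optimality in TRPO/PPO-style methods.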




Has companion code repository: https://github.com/kadysongbb/dr-trpo








