Bregman Gradient Policy Optimization

Publication: 6371014

arXiv: 2106.12112 · MaRDI QID: Q6371014

Author name not available

Publication date: 22 June 2021

Abstract: In this paper, we design a novel Bregman gradient policy optimization framework for reinforcement learning based on Bregman divergences and momentum techniques. Specifically, we propose a Bregman gradient policy optimization (BGPO) algorithm based on a basic momentum technique and mirror descent iteration. We further propose an accelerated Bregman gradient policy optimization (VR-BGPO) algorithm based on a variance-reduced technique. Moreover, we provide a convergence analysis framework for our Bregman gradient policy optimization under the nonconvex setting. We prove that our BGPO achieves a sample complexity of $O(\epsilon^{-4})$ for finding an $\epsilon$-stationary policy while requiring only one trajectory at each iteration, and that our VR-BGPO reaches the best known sample complexity of $O(\epsilon^{-3})$, also requiring only one trajectory at each iteration. In particular, by using different Bregman divergences, our BGPO framework unifies many existing policy optimization algorithms, such as the existing (variance-reduced) policy gradient algorithms and the natural policy gradient algorithm. Extensive experimental results on multiple reinforcement learning tasks demonstrate the efficiency of our new algorithms.
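
Below is a minimal sketch of the update rule the abstract describes: a momentum-averaged policy gradient estimated from a single trajectory per iteration, followed by a mirror-descent (Bregman) step. The toy bandit environment, softmax policy parameterization, and hyper-parameter values are illustrative assumptions, not the authors' implementation (see the companion repository for that).

```python
# Illustrative sketch of a BGPO-style update: momentum-averaged REINFORCE
# gradient plus a mirror-descent step. Environment, policy, and constants
# are assumptions made for this example only.
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.8])   # hypothetical 3-armed bandit

def sample_trajectory(theta):
    """One 'trajectory' (here a single pull) and its score-function gradient."""
    probs = np.exp(theta - theta.max())
    probs /= probs.sum()
    a = rng.choice(len(theta), p=probs)
    reward = true_means[a] + 0.1 * rng.standard_normal()
    grad_logpi = -probs
    grad_logpi[a] += 1.0                  # grad of log softmax at the chosen arm
    return reward * grad_logpi            # REINFORCE estimate of the policy gradient

def mirror_step(theta, u, eta):
    """Bregman proximal (mirror-ascent) step.

    With the squared-Euclidean distance as the Bregman divergence this reduces
    to a plain gradient ascent step; other divergences give other members of
    the framework, as the abstract notes.
    """
    return theta + eta * u

theta = np.zeros(3)
u = np.zeros_like(theta)                  # momentum-averaged gradient estimate
beta, eta = 0.2, 0.5                      # illustrative momentum and step sizes

for t in range(2000):
    g = sample_trajectory(theta)          # one trajectory per iteration
    u = (1.0 - beta) * u + beta * g       # basic momentum (BGPO); VR-BGPO would
                                          # add a variance-reduced correction here
    theta = mirror_step(theta, u, eta)

print("learned action probabilities:",
      np.round(np.exp(theta) / np.exp(theta).sum(), 3))
```

The squared-Euclidean choice is only the simplest instance; per the abstract, substituting other Bregman divergences (e.g. a KL term over the policy) recovers other policy optimization algorithms within the same framework.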




Has companion code repository: https://github.com/gaosh/bgpo








