Anchor-Changing Regularized Natural Policy Gradient for Multi-Objective Reinforcement Learning

Publication: 6401740

arXiv: 2206.05357 · MaRDI QID: Q6401740

Author name not available

Publication date: 10 June 2022

Abstract: We study policy optimization for Markov decision processes (MDPs) with multiple reward value functions, which are to be jointly optimized according to given criteria such as proportional fairness (smooth concave scalarization), hard constraints (constrained MDP), and max-min trade-off. We propose an Anchor-changing Regularized Natural Policy Gradient (ARNPG) framework, which can systematically incorporate ideas from well-performing first-order methods into the design of policy optimization algorithms for multi-objective MDP problems. Theoretically, the algorithms designed within the ARNPG framework achieve $\tilde{O}(1/T)$ global convergence with exact gradients. Empirically, the ARNPG-guided algorithms also demonstrate superior performance compared to some existing policy-gradient-based approaches in both the exact-gradient and sample-based settings.
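To make the abstract's description concrete, below is a minimal sketch of an anchor-regularized natural policy gradient loop on a toy tabular MDP with two reward objectives combined by a fixed weight vector. The toy MDP, the weights `w`, the step size `eta`, the regularization weight `lam`, and the anchor-update period are illustrative assumptions; the multiplicative update used here is one standard closed form for KL-regularized NPG on tabular softmax policies and is not taken verbatim from the paper or its companion code.

```python
# Sketch: anchor-regularized NPG on a toy multi-objective tabular MDP.
# All problem data and hyperparameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

S, A, gamma = 4, 3, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))   # transition kernel P[s, a, s']
R = rng.uniform(0.0, 1.0, size=(2, S, A))    # two reward objectives
w = np.array([0.5, 0.5])                     # scalarization weights (assumed)
r = np.tensordot(w, R, axes=1)               # combined reward r[s, a]

def q_values(pi):
    """Exact Q^pi for the scalarized reward via linear policy evaluation."""
    P_pi = np.einsum('sa,sap->sp', pi, P)    # state transitions under pi
    r_pi = np.einsum('sa,sa->s', pi, r)      # expected reward under pi
    V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
    return r + gamma * P @ V                 # Q[s, a]

def arnpg_step(pi, anchor, q, eta=1.0, lam=0.1):
    """One KL-regularized NPG step pulled toward the current anchor policy.

    Uses the closed-form multiplicative update for tabular softmax policies:
      pi_new(a|s) ∝ pi(a|s)^(1-eta*lam) * anchor(a|s)^(eta*lam) * exp(eta*Q(s,a)).
    """
    logits = (1 - eta * lam) * np.log(pi) + eta * lam * np.log(anchor) + eta * q
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    new_pi = np.exp(logits)
    return new_pi / new_pi.sum(axis=1, keepdims=True)

pi = np.full((S, A), 1.0 / A)                # uniform initial policy
anchor = pi.copy()
T, anchor_period = 200, 10                   # assumed horizon and anchor schedule

for t in range(T):
    q = q_values(pi)
    pi = arnpg_step(pi, anchor, q)
    if (t + 1) % anchor_period == 0:         # "anchor changing": periodically move
        anchor = pi.copy()                   # the anchor to the current policy

rho = np.full(S, 1.0 / S)                    # uniform initial state distribution
V_final = np.linalg.solve(np.eye(S) - gamma * np.einsum('sa,sap->sp', pi, P),
                          np.einsum('sa,sa->s', pi, r))
print("scalarized value:", rho @ V_final)
```

Setting the anchor to the uniform policy and never updating it recovers an entropy-regularized NPG baseline; periodically re-anchoring at the current iterate is what lets the regularized iteration keep improving toward the unregularized optimum.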




Has companion code repository: https://github.com/tliu1997/arnpg-morl
