Deep PQR: Solving Inverse Reinforcement Learning using Anchor Actions
From MaRDI portal
Publication:6345120
arXiv: 2007.07443
MaRDI QID: Q6345120
Author name not available
Publication date: 14 July 2020
Abstract: We propose a reward function estimation framework for inverse reinforcement learning with deep energy-based policies. We name our method PQR, as it sequentially estimates the Policy, the Q-function, and the Reward function by deep learning. PQR does not assume that the reward depends solely on the state; instead, it allows for a dependency on the choice of action. Moreover, PQR allows for stochastic state transitions. To accomplish this, we assume the existence of one anchor action whose reward is known, typically the action of doing nothing, yielding no reward. We present both estimators and algorithms for the PQR method. When the environment transition is known, we prove that the PQR reward estimator uniquely recovers the true reward. With unknown transitions, we bound the estimation error of PQR. Finally, the performance of PQR is demonstrated on synthetic and real-world datasets.
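The anchor-action idea in the abstract can be illustrated in a small tabular setting. The sketch below is not the paper's implementation (which uses deep networks); it is a hypothetical NumPy illustration, under the standard energy-based-policy assumption that pi(a|s) is proportional to exp(Q(s,a)), so the observed policy identifies Q only up to a per-state constant, and the known-reward anchor action pins that constant down before the Bellman equation is inverted for the reward.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 4, 3, 0.9
ANCHOR = 0  # "do nothing" action, assumed to yield zero reward

# Illustrative ground truth: reward with r(s, ANCHOR) = 0, known transitions P[s, a, s'].
true_r = rng.normal(size=(n_states, n_actions))
true_r[:, ANCHOR] = 0.0
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))

# Soft value iteration gives the energy-based optimal Q (stand-in for the P and Q steps).
Q = np.zeros((n_states, n_actions))
for _ in range(500):
    V = np.log(np.exp(Q).sum(axis=1))  # soft value: V(s) = log sum_a exp Q(s,a)
    Q = true_r + gamma * P @ V         # soft Bellman backup

# What an IRL method actually observes: the policy pi(a|s) ∝ exp(Q(s,a)).
log_pi = Q - np.log(np.exp(Q).sum(axis=1, keepdims=True))

# R step (sketch): log_pi identifies Q up to a per-state constant c(s); the anchor
# action fixes c(s) because Q(s, ANCHOR) = 0 + gamma * E[V(s')] is known. Iterate
# that relation to a fixed point, then invert the Bellman equation for the reward.
Q_rel = log_pi - log_pi[:, [ANCHOR]]   # Q(s,a) - Q(s,ANCHOR)
Q_anchor = np.zeros(n_states)
for _ in range(500):
    Q_hat = Q_rel + Q_anchor[:, None]
    V_hat = np.log(np.exp(Q_hat).sum(axis=1))
    Q_anchor = gamma * P[:, ANCHOR] @ V_hat  # anchor reward is known to be zero

r_hat = Q_hat - gamma * P @ V_hat      # r(s,a) = Q(s,a) - gamma * E[V(s')]
print(np.max(np.abs(r_hat - true_r)))  # recovery error is near machine precision
```

With known transitions the recovered reward matches the true one, mirroring the identifiability result stated in the abstract; with unknown transitions, P would itself be estimated and the recovery would carry the corresponding error.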
Has companion code repository: https://github.com/gengsinong/samq