Identifiability and Generalizability in Constrained Inverse Reinforcement Learning

From MaRDI portal

arXiv: 2306.00629
MaRDI QID: Q6438833

Author name not available

Publication date: 1 June 2023

Abstract: Two main challenges in Reinforcement Learning (RL) are designing appropriate reward functions and ensuring the safety of the learned policy. To address these challenges, we present a theoretical framework for Inverse Reinforcement Learning (IRL) in constrained Markov decision processes. From a convex-analytic perspective, we extend prior results on reward identifiability and generalizability to both the constrained setting and a more general class of regularizations. In particular, we show that identifiability up to potential shaping (Cao et al., 2021) is a consequence of entropy regularization and may generally no longer hold for other regularizations or in the presence of safety constraints. We also show that to ensure generalizability to new transition laws and constraints, the true reward must be identified up to a constant. Additionally, we derive a finite sample guarantee for the suboptimality of the learned rewards, and validate our results in a gridworld environment.
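The following is a minimal sketch (not taken from the publication or its companion repository) of the potential-shaping invariance mentioned in the abstract: under entropy regularization with temperature 1, soft value iteration on a small randomly generated MDP yields the same softmax policy for a reward r and its shaped version r'(s, a, s') = r(s, a, s') + gamma * Phi(s') - Phi(s). The MDP, the potential Phi, and all constants below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 3, 0.9

# Hypothetical MDP: random transition kernel P[s, a, s'] and reward r[s, a].
P = rng.random((n_states, n_actions, n_states))
P /= P.sum(axis=2, keepdims=True)
r = rng.random((n_states, n_actions))

# Potential shaping in expectation: r'(s, a) = r(s, a) + gamma * E_{s'}[Phi(s')] - Phi(s).
Phi = rng.random(n_states)
r_shaped = r + gamma * (P @ Phi) - Phi[:, None]

def soft_policy(reward, n_iters=2000):
    # Entropy-regularized ("soft") value iteration with temperature 1;
    # returns the softmax policy pi[s, a] = exp(Q[s, a] - V[s]).
    V = np.zeros(n_states)
    for _ in range(n_iters):
        Q = reward + gamma * (P @ V)          # soft Q-values
        V = np.log(np.exp(Q).sum(axis=1))     # V = logsumexp over actions
    return np.exp(Q - V[:, None])

pi = soft_policy(r)
pi_shaped = soft_policy(r_shaped)
print("max policy difference:", np.abs(pi - pi_shaped).max())  # close to 0

As the abstract states, this invariance is a consequence of entropy regularization; with other regularizations or active safety constraints it may no longer hold, which is why generalizability to new transition laws and constraints requires identifying the reward up to a constant.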

Has companion code repository: https://github.com/andrschl/cirl
