Generalization Guarantees for Imitation Learning

arXiv: 2008.01913
MaRDI QID: Q6346518

Author name not available

Publication date: 4 August 2020

Abstract: Control policies from imitation learning can often fail to generalize to novel environments due to imperfect demonstrations or the inability of imitation learning algorithms to accurately infer the expert's policies. In this paper, we present rigorous generalization guarantees for imitation learning by leveraging the Probably Approximately Correct (PAC)-Bayes framework to provide upper bounds on the expected cost of policies in novel environments. We propose a two-stage training method where a latent policy distribution is first embedded with multi-modal expert behavior using a conditional variational autoencoder, and then "fine-tuned" in new training environments to explicitly optimize the generalization bound. We demonstrate strong generalization bounds and their tightness relative to empirical performance in simulation for (i) grasping diverse mugs, (ii) planar pushing with visual feedback, and (iii) vision-based indoor navigation, as well as through hardware experiments for the two manipulation tasks.
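The central quantity in the abstract is a PAC-Bayes upper bound on the expected cost of a policy distribution in novel environments. Below is a minimal sketch of how such a bound can be evaluated, assuming a standard McAllester-style PAC-Bayes inequality with costs in [0, 1]; the paper's exact bound may take a different form, and all names here (pac_bayes_bound, kl_divergence, n_envs, delta) are illustrative rather than taken from the paper or its code.

    import math

    def pac_bayes_bound(empirical_cost, kl_divergence, n_envs, delta=0.01):
        # Upper bound on the expected cost in novel environments, given:
        #   empirical_cost: average cost of the policy distribution over the
        #                   n_envs training environments (assumed in [0, 1])
        #   kl_divergence:  KL(posterior || prior) over the latent policy space
        #   delta:          failure probability; the bound holds with
        #                   probability at least 1 - delta over the sample
        # McAllester-style complexity term: (KL + ln(2*sqrt(N)/delta)) / (2N)
        complexity = (kl_divergence
                      + math.log(2.0 * math.sqrt(n_envs) / delta)) / (2.0 * n_envs)
        return empirical_cost + math.sqrt(complexity)

    # Example: 1000 training environments, empirical cost 0.08, KL term 4.0
    print(pac_bayes_bound(0.08, 4.0, 1000))  # ~0.16 upper bound on expected cost

In the two-stage method described above, the second stage "fine-tunes" the posterior over the latent policy space to explicitly optimize a bound of this kind, trading off lower empirical cost against a larger KL term.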




Companion code repository: https://github.com/irom-lab/dragen







