Provably Efficient Exploration for Reinforcement Learning Using Unsupervised Learning

From MaRDI portal
Publication: 6336792

arXiv: 2003.06898
MaRDI QID: Q6336792

Simon S. Du, Lin F. Yang, Fei Feng, Ruosong Wang, Wotao Yin

Publication date: 15 March 2020

Abstract: Motivated by the prevailing paradigm of using unsupervised learning for efficient exploration in reinforcement learning (RL) problems [tang2017exploration,bellemare2016unifying], we investigate when this paradigm is provably efficient. We study episodic Markov decision processes with rich observations generated from a small number of latent states. We present a general algorithmic framework that is built upon two components: an unsupervised learning algorithm and a no-regret tabular RL algorithm. Theoretically, we prove that as long as the unsupervised learning algorithm enjoys a polynomial sample complexity guarantee, we can find a near-optimal policy with sample complexity polynomial in the number of latent states, which is significantly smaller than the number of observations. Empirically, we instantiate our framework on a class of hard exploration problems to demonstrate the practicality of our theory.
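Below is a hypothetical minimal sketch (not the paper's code) of the decode-then-plan pattern the abstract describes: an unsupervised learning step maps rich observations to a small set of latent-state labels, and a tabular RL algorithm is then run on the decoded labels. The toy block MDP, k-means decoder, and Q-learning stand-in for a no-regret tabular algorithm are all illustrative assumptions.

```python
# Hypothetical sketch: unsupervised decoding of rich observations to latent
# states, followed by tabular RL on the decoded state indices.
# Everything here (environment, decoder choice, Q-learning) is illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy block MDP: a few latent states emit high-dimensional noisy observations.
n_latent, n_actions, horizon, obs_dim = 5, 2, 5, 32
centers = rng.normal(size=(n_latent, obs_dim)) * 3.0               # emission means
P = rng.dirichlet(np.ones(n_latent), size=(n_latent, n_actions))   # transition kernel
R = rng.uniform(size=(n_latent, n_actions))                        # rewards

def emit(s):
    """Rich observation generated from latent state s."""
    return centers[s] + rng.normal(size=obs_dim)

# 1) Unsupervised learning: cluster observations from random rollouts to
#    recover (a relabeling of) the latent states.
obs_buffer = []
for _ in range(500):
    s = rng.integers(n_latent)
    for _ in range(horizon):
        obs_buffer.append(emit(s))
        s = rng.choice(n_latent, p=P[s, rng.integers(n_actions)])
decoder = KMeans(n_clusters=n_latent, n_init=10, random_state=0).fit(np.array(obs_buffer))

# 2) Tabular RL (simple epsilon-greedy Q-learning, used here as a stand-in
#    for a no-regret tabular algorithm) on the decoded latent-state indices.
Q = np.zeros((n_latent, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.2
for episode in range(2000):
    s = rng.integers(n_latent)
    z = decoder.predict(emit(s)[None])[0]            # decoded state index
    for _ in range(horizon):
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[z].argmax())
        r = R[s, a]
        s = rng.choice(n_latent, p=P[s, a])
        z_next = decoder.predict(emit(s)[None])[0]
        Q[z, a] += alpha * (r + gamma * Q[z_next].max() - Q[z, a])
        z = z_next

print("Greedy policy on decoded states:", Q.argmax(axis=1))
```

The sketch's sample complexity depends only on the number of latent states once the decoder is accurate, which mirrors the abstract's claim; the paper's actual framework and guarantees are in the linked article and companion repository.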




Has companion code repository: https://github.com/FlorenceFeng/StateDecoding








This page was built for publication: Provably Efficient Exploration for Reinforcement Learning Using Unsupervised Learning
