The Uncertainty Bellman Equation and Exploration


arXiv: 1709.05380

MaRDI QID: Q6291375

Author name not available

Publication date: 15 September 2017

Abstract: We consider the exploration/exploitation problem in reinforcement learning. For exploitation, it is well known that the Bellman equation connects the value at any time-step to the expected value at subsequent time-steps. In this paper we consider a similar uncertainty Bellman equation (UBE), which connects the uncertainty at any time-step to the expected uncertainties at subsequent time-steps, thereby extending the potential exploratory benefit of a policy beyond individual time-steps. We prove that the unique fixed point of the UBE yields an upper bound on the variance of the posterior distribution of the Q-values induced by any policy. This bound can be much tighter than traditional count-based bonuses that compound standard deviation rather than variance. Importantly, and unlike several existing approaches to optimism, this method scales naturally to large systems with complex generalization. Substituting our UBE-exploration strategy for epsilon-greedy improves DQN performance on 51 out of 57 games in the Atari suite.
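A sketch to make the abstract concrete (the discounted form, the symbol $\nu$ for the local one-step uncertainty, and all function names below are illustrative assumptions, not taken from the paper or its companion code). In a discounted MDP, a UBE of the kind described above can be written as

    $u^\pi(s,a) = \nu(s,a) + \gamma^2 \sum_{s',a'} P(s' \mid s,a)\, \pi(a' \mid s')\, u^\pi(s',a')$,

and its unique fixed point satisfies $\mathrm{Var}[Q^\pi(s,a)] \le u^\pi(s,a)$, so $\sqrt{u^\pi}$ can serve as an exploration bonus in place of epsilon-greedy. A minimal tabular fixed-point iteration in Python:

import numpy as np

def solve_ube(nu, P, pi, gamma=0.99, n_iters=500):
    # nu: (S, A) local one-step uncertainties; P: (S, A, S) transition probabilities;
    # pi: (S, A) policy probabilities. Returns u: (S, A), an upper bound on Var[Q^pi].
    S, A = nu.shape
    u = np.zeros((S, A))
    for _ in range(n_iters):
        # Expected next-step uncertainty under pi: sum_{s',a'} P(s'|s,a) pi(a'|s') u(s',a')
        next_u = P.reshape(S * A, S) @ (pi * u).sum(axis=1)
        u = nu + gamma ** 2 * next_u.reshape(S, A)
    return u

def explore_action(q, u, state, zeta):
    # Greedy in Q plus a scaled standard-deviation bonus (replaces epsilon-greedy).
    return int(np.argmax(q[state] + zeta * np.sqrt(u[state])))

Here zeta might be a fixed optimism coefficient or, Thompson-sampling style, a draw from a standard normal; either reading is an assumption about how the bound could be used, not a description of the paper's exact Atari procedure.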




Has companion code repository: https://github.com/stratismarkou/sample-efficient-bayesian-rl







