Revisiting State Augmentation methods for Reinforcement Learning with Stochastic Delays

Publication: 6375414

arXiv: 2108.07555
MaRDI QID: Q6375414

Author name not available.

Publication date: 17 August 2021

Abstract: Several real-world scenarios, such as remote control and sensing, involve action and observation delays. The presence of delays degrades the performance of reinforcement learning (RL) algorithms, often to such an extent that they fail to learn anything substantial. This paper formally describes the notion of Markov Decision Processes (MDPs) with stochastic delays and shows that delayed MDPs can be transformed into equivalent standard MDPs (without delays) with a significantly simplified cost structure. We employ this equivalence to derive a model-free Delay-Resolved RL framework and show that even a simple RL algorithm built on this framework achieves near-optimal rewards in environments with stochastic delays in actions and observations. The delay-resolved deep Q-network (DRDQN) algorithm is benchmarked on a variety of environments with multi-step and stochastic delays and outperforms currently established algorithms, both in achieving near-optimal rewards and in minimizing the computational overhead incurred.
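
Below is a minimal, illustrative Python sketch of the state-augmentation idea the abstract refers to: because submitted actions take effect only after a stochastic delay, the agent's state is augmented with the buffer of pending actions so that the resulting process is again a standard (delay-free) MDP. The toy environment (ToyChainEnv), the wrapper name (StochasticDelayWrapper), and all parameters are hypothetical and not taken from the paper or its companion repository; this is not the authors' DRDQN implementation.

import random
from collections import deque


class ToyChainEnv:
    """Tiny 1-D chain environment, included only to make the sketch runnable."""

    def __init__(self, length=5):
        self.length = length
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        # action: 1 moves right, anything else moves left
        self.pos = max(0, min(self.length, self.pos + (1 if action == 1 else -1)))
        done = self.pos == self.length
        reward = 1.0 if done else 0.0
        return self.pos, reward, done


class StochasticDelayWrapper:
    """Executes each submitted action only after a random delay and augments
    the observation with the buffer of still-pending actions, so the delayed
    process can be treated as a standard (delay-free) MDP."""

    def __init__(self, env, max_delay=3):
        self.env = env
        self.max_delay = max_delay
        self.pending = deque()      # [remaining_delay, action] pairs, FIFO
        self.last_obs = None

    def reset(self):
        self.pending.clear()
        self.last_obs = self.env.reset()
        return self._augment()

    def step(self, action):
        # The new action joins the queue with a random (stochastic) delay.
        self.pending.append([random.randint(1, self.max_delay), action])

        # Age every queued action, then execute those whose delay has expired,
        # preserving the order in which they were submitted.
        for entry in self.pending:
            entry[0] -= 1
        reward, done = 0.0, False
        while self.pending and self.pending[0][0] <= 0 and not done:
            _, pending_action = self.pending.popleft()
            self.last_obs, r, done = self.env.step(pending_action)
            reward += r
        return self._augment(), reward, done

    def _augment(self):
        # Augmented state = last observation + fixed-length pending-action
        # buffer (-1 marks an empty slot).
        buffer = [a for _, a in self.pending] + [-1] * self.max_delay
        return (self.last_obs, *buffer[: self.max_delay])


if __name__ == "__main__":
    env = StochasticDelayWrapper(ToyChainEnv())
    state, done = env.reset(), False
    while not done:
        state, reward, done = env.step(1)   # always try to move right
        print(state, reward, done)

Running the script steps the toy chain to the right while actions arrive late; the printed augmented states show the pending-action buffer that a delay-aware agent would condition on.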

Has companion code repository: https://github.com/baranwa2/delayresolvedrl
