Simple random search provides a competitive approach to reinforcement learning
From MaRDI portal
Publication: 6299267
arXiv: 1803.07055 · MaRDI QID: Q6299267
Author name not available
Publication date: 19 March 2018
Abstract: A common belief in model-free reinforcement learning is that methods based on random search in the parameter space of policies exhibit significantly worse sample complexity than those that explore the space of actions. We dispel such beliefs by introducing a random search method for training static, linear policies for continuous control problems, matching state-of-the-art sample efficiency on the benchmark MuJoCo locomotion tasks. Our method also finds a nearly optimal controller for a challenging instance of the Linear Quadratic Regulator, a classical problem in control theory, when the dynamics are not known. Computationally, our random search algorithm is at least 15 times more efficient than the fastest competing model-free methods on these benchmarks. We take advantage of this computational efficiency to evaluate the performance of our method over hundreds of random seeds and many different hyperparameter configurations for each benchmark task. Our simulations highlight a high variability in performance in these benchmark tasks, suggesting that commonly used estimations of sample efficiency do not adequately evaluate the performance of RL algorithms.
Has companion code repository: https://github.com/cpow-89/Augmented-Random-Search
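The abstract's claim rests on a strikingly simple update rule: perturb the linear policy's parameters in random directions, compare rollout rewards at the perturbed points, and step along the average reward difference. Below is a minimal sketch of that basic random-search step on a toy Linear Quadratic Regulator instance (the paper's full method additionally normalizes states and scales the step by the reward standard deviation). The toy system, hyperparameters, and function names here are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Illustrative toy problem (an assumption, not from the paper): a small
# discrete-time linear system x' = A x + B u with quadratic cost, so the
# reward of a linear policy u = -K x is its negative LQR cost.
def rollout_reward(K, A, B, Q, R, x0, horizon=10):
    x, cost = x0.copy(), 0.0
    for _ in range(horizon):
        u = -K @ x
        cost += x @ Q @ x + u @ R @ u
        x = A @ x + B @ u
    return -cost

def basic_random_search(reward_fn, theta, n_dirs=16, step=0.01,
                        noise=0.1, iters=300, seed=0):
    """Basic random search: sample random perturbation directions of the
    policy parameters and move along the average finite-difference of
    rollout rewards."""
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        update = np.zeros_like(theta)
        for _ in range(n_dirs):
            delta = rng.standard_normal(theta.shape)
            r_plus = reward_fn(theta + noise * delta)
            r_minus = reward_fn(theta - noise * delta)
            update += (r_plus - r_minus) * delta
        theta = theta + (step / n_dirs) * update
    return theta

# Usage: a slightly unstable 2-state system, starting from the zero policy.
A = np.array([[1.01, 0.01], [0.01, 1.01]])
B = np.eye(2)
Q = np.eye(2)
R = 0.1 * np.eye(2)
x0 = np.ones(2)
reward = lambda K: rollout_reward(K, A, B, Q, R, x0)
K0 = np.zeros((2, 2))
K = basic_random_search(reward, K0)
print(reward(K0), reward(K))  # the learned gain should improve the reward
```

Note that the update uses only reward evaluations, never gradients of the policy or dynamics, which is why each iteration parallelizes trivially across the sampled directions.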