ORL: Reinforcement Learning Benchmarks for Online Stochastic Optimization Problems

From MaRDI portal
Publication:6329789

arXiv: 1911.10641 · MaRDI QID: Q6329789

Alvaro Maggiar, Andreas Damianou, Arpit Jain, Balakrishnan Narayanaswamy, Bharathan Balaji, Chun Ye, Enes Bilgin, Jordan Bell-Masterson, Pablo Moreno Garcia, Runfei Luo

Publication date: 24 November 2019

Abstract: Reinforcement Learning (RL) has achieved state-of-the-art results in domains such as robotics and games. We build on this previous work by applying RL algorithms to a selection of canonical online stochastic optimization problems with a range of practical applications: Bin Packing, Newsvendor, and Vehicle Routing. While there is a nascent literature that applies RL to these problems, there are no commonly accepted benchmarks which can be used to compare proposed approaches rigorously in terms of performance, scale, or generalizability. This paper aims to fill that gap. For each problem we apply both standard approaches as well as newer RL algorithms and analyze results. In each case, the performance of the trained RL policy is competitive with or superior to the corresponding baselines, while not requiring much in the way of domain knowledge. This highlights the potential of RL in real-world dynamic resource allocation problems.

Has companion code repository: https://github.com/hubbs5/or-gym
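The abstract describes benchmark environments for online stochastic optimization problems such as Newsvendor, exposed through a Gym-style interface in the companion repository. As an illustration of that problem setting, here is a minimal, self-contained sketch of a newsvendor environment with a `reset`/`step` interface. This is a hypothetical example for intuition only, not the paper's actual benchmark: the class name, reward parameters, and demand distribution are all assumptions.

```python
import random

class NewsvendorEnv:
    """Illustrative Gym-style newsvendor environment (not the paper's
    benchmark). Each step, the agent commits to an order quantity before
    observing random demand; reward = sales revenue minus purchase cost."""

    def __init__(self, price=5.0, cost=3.0, max_demand=20, horizon=10, seed=0):
        self.price, self.cost = price, cost          # unit sale price / unit cost
        self.max_demand, self.horizon = max_demand, horizon
        self.rng = random.Random(seed)               # assumed uniform demand

    def reset(self):
        self.t = 0
        return self.t                                # observation: current timestep

    def step(self, order_qty):
        demand = self.rng.randint(0, self.max_demand)
        # Revenue is capped by realized demand; unsold stock is a sunk cost.
        reward = self.price * min(order_qty, demand) - self.cost * order_qty
        self.t += 1
        done = self.t >= self.horizon
        return self.t, reward, done, {"demand": demand}

# Rolling out a fixed-order heuristic baseline, as one might compare
# against a trained RL policy.
env = NewsvendorEnv()
obs, total, done = env.reset(), 0.0, False
while not done:
    obs, reward, done, info = env.step(order_qty=10)
    total += reward
```

An RL agent would replace the fixed `order_qty=10` with a learned policy mapping observations to order quantities, which is the kind of comparison against baselines the paper performs across its benchmark problems.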
