Accelerating Quadratic Optimization with Reinforcement Learning
From MaRDI portal
Publication: 6373519
arXiv: 2107.10847 · MaRDI QID: Q6373519
Ion Stoica, Francesco Borrelli, Ken Goldberg, Bartolomeo Stellato, Goran Banjac, Jeffrey Ichnowski, Joseph E. Gonzalez, Michael Luo, Paras Jain
Publication date: 22 July 2021
Abstract: First-order methods for quadratic optimization such as OSQP are widely used for large-scale machine learning and embedded optimal control, where many related problems must be rapidly solved. These methods face two persistent challenges: manual hyperparameter tuning and convergence time to high-accuracy solutions. To address these, we explore how Reinforcement Learning (RL) can learn a policy to tune parameters to accelerate convergence. In experiments with well-known QP benchmarks we find that our RL policy, RLQP, significantly outperforms state-of-the-art QP solvers by up to 3x. RLQP generalizes surprisingly well to previously unseen problems with varying dimension and structure from different applications, including the QPLIB, Netlib LP and Maros-Meszaros problems. Code for RLQP is available at https://github.com/berkeleyautomation/rlqp.
Has companion code repository: https://github.com/berkeleyautomation/rlqp
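The approach in the abstract builds on first-order (ADMM-based) QP solvers such as OSQP, whose convergence speed depends heavily on the penalty parameter rho; RLQP replaces the hand-tuned rho-update rule with a learned policy. As a self-contained illustration of what is being tuned, the sketch below implements a generic ADMM solver for a box-constrained QP with the classic residual-balancing rho heuristic. This is not the RLQP or OSQP implementation; the function name, the toy problem, and the fixed heuristic parameters (`mu`, `tau`) are illustrative assumptions.

```python
import numpy as np

def admm_box_qp(P, q, lo, hi, rho=1.0, n_iter=500, mu=10.0, tau=2.0):
    """Solve min 0.5 x'Px + q'x  s.t.  lo <= x <= hi  via ADMM.

    Uses the residual-balancing rho update, the kind of hand-tuned
    heuristic that RLQP replaces with a learned policy.
    (Illustrative sketch, not the actual OSQP/RLQP code.)
    """
    n = len(q)
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable, u = y / rho
    for _ in range(n_iter):
        # x-update: minimize 0.5 x'Px + q'x + (rho/2)||x - z + u||^2
        x = np.linalg.solve(P + rho * np.eye(n), rho * (z - u) - q)
        z_old = z
        # z-update: project onto the box constraints
        z = np.clip(x + u, lo, hi)
        # scaled dual ascent step
        u = u + x - z
        # residual-balancing rho adaptation: keep primal and dual
        # residuals within a factor mu of each other
        r = np.linalg.norm(x - z)               # primal residual
        s = np.linalg.norm(rho * (z - z_old))   # dual residual
        if r > mu * s:
            rho *= tau
            u /= tau   # rescale scaled dual when rho changes
        elif s > mu * r:
            rho /= tau
            u *= tau
    return z

# Toy problem: unconstrained minimizer of 0.5||x||^2 + q'x is (1, 2);
# the box [0, 0.5]^2 clips the solution to (0.5, 0.5).
x_star = admm_box_qp(np.eye(2), np.array([-1.0, -2.0]), lo=0.0, hi=0.5)
```

In RLQP, the fixed `if r > mu * s` rule above is replaced by a policy trained with RL that maps per-constraint residual features to rho updates, which is what yields the reported speedups on unseen problems.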