Learning to branch with Tree MDPs
Publication: 6399891
arXiv: 2205.11107
MaRDI QID: Q6399891
Author name not available
Publication date: 23 May 2022
Abstract: State-of-the-art Mixed Integer Linear Program (MILP) solvers combine systematic tree search with a plethora of hard-coded heuristics, such as the branching rule. The idea of learning branching rules from data has received increasing attention recently, and promising results have been obtained by learning fast approximations of the strong branching expert. In this work, we instead propose to learn branching rules from scratch via Reinforcement Learning (RL). We revisit the work of Etheve et al. (2020) and propose tree Markov Decision Processes, or tree MDPs, a generalization of temporal MDPs that provides a more suitable framework for learning to branch. We derive a tree policy gradient theorem, which exhibits a better credit assignment compared to its temporal counterpart. We demonstrate through computational experiments that tree MDPs improve the learning convergence, and offer a promising framework for tackling the learning-to-branch problem in MILPs.
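Illustrative note: the abstract's key idea is that a branching decision should be credited only with the outcome of its own subtree rather than with the whole remaining search trajectory. The sketch below is not taken from the paper or the companion repository; the Node structure, the reward convention, and all function names are assumptions introduced purely for illustration of that credit-assignment contrast.

from dataclasses import dataclass, field

@dataclass
class Node:
    log_prob: float                    # log pi(branching action | node state)
    reward: float                      # local reward, e.g. -1 per expanded node
    children: list["Node"] = field(default_factory=list)

def subtree_return(node: Node) -> float:
    # Total reward collected in the subtree rooted at this node.
    return node.reward + sum(subtree_return(c) for c in node.children)

def tree_policy_gradient_loss(root: Node) -> float:
    # REINFORCE-style surrogate loss with tree-MDP credit assignment:
    # each decision's log-probability is weighted by its own subtree's
    # return, not by the return of the entire trajectory.
    loss = -root.log_prob * subtree_return(root)
    for child in root.children:
        loss += tree_policy_gradient_loss(child)
    return loss

# Example: the root decision is credited with all three rewards,
# each child only with its own.
tree = Node(log_prob=-0.5, reward=-1.0, children=[
    Node(log_prob=-0.7, reward=-1.0),
    Node(log_prob=-0.4, reward=-1.0),
])
print(tree_policy_gradient_loss(tree))

Weighting every log-probability by the total episode return instead of subtree_return would recover the standard temporal policy gradient, which is the counterpart the abstract compares against.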
Has companion code repository: https://github.com/lascavana/rl2branch