FastDOG: Fast Discrete Optimization on GPU

Publication:6383469

arXiv: 2111.10270
MaRDI QID: Q6383469

Author name not available

Publication date: 19 November 2021

Abstract: We present a massively parallel Lagrange decomposition method for solving 0–1 integer linear programs occurring in structured prediction. We propose a new iterative update scheme for solving the Lagrangean dual and a perturbation technique for decoding primal solutions. For representing subproblems we follow Lange et al. (2021) and use binary decision diagrams (BDDs). Our primal and dual algorithms require little synchronization between subproblems and optimization over BDDs needs only elementary operations without complicated control flow. This allows us to exploit the parallelism offered by GPUs for all components of our method. We present experimental results on combinatorial problems from MAP inference for Markov Random Fields, quadratic assignment and cell tracking for developmental biology. Our highly parallel GPU implementation improves upon the running times of the algorithms from Lange et al. (2021) by up to an order of magnitude. In particular, we come close to or outperform some state-of-the-art specialized heuristics while being problem agnostic. Our implementation is available at https://github.com/LPMP/BDD.
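
The dual update described in the abstract can be illustrated with a small sketch. The following Python snippet is a hypothetical toy example, not the authors' GPU implementation: it applies sequential min-marginal averaging in the spirit of Lange et al. (2021) to a made-up 0-1 program, and uses brute-force enumeration of subproblem assignments where the paper uses BDD traversals. The instance, variable layout and iteration count are assumptions chosen for illustration.

import itertools

# Hypothetical toy instance, for illustration only:
#   minimize c.x over x in {0,1}^3, decomposed into two subproblems
#     A (variables 0, 1): x0 + x1 >= 1
#     B (variables 1, 2): x1 + x2 <= 1
c = [1.0, -2.0, 0.5]
subproblems = [
    {"vars": [0, 1], "feasible": lambda a: a[0] + a[1] >= 1},
    {"vars": [1, 2], "feasible": lambda a: a[0] + a[1] <= 1},
]

# Each subproblem keeps its own copy of the costs of its variables (the
# Lagrange multipliers); the copies of a shared variable sum to its original cost.
def copies(i):
    return sum(1 for s in subproblems if i in s["vars"])

lam = [[c[i] / copies(i) for i in s["vars"]] for s in subproblems]

def min_marginals(sub, costs, k):
    # Best feasible objective with the k-th local variable fixed to 0 and to 1
    # (brute-force enumeration stands in for a BDD shortest-path pass).
    best = [float("inf"), float("inf")]
    for a in itertools.product([0, 1], repeat=len(sub["vars"])):
        if sub["feasible"](a):
            val = sum(costs[j] * a[j] for j in range(len(a)))
            best[a[k]] = min(best[a[k]], val)
    return best

def subproblem_min(sub, costs):
    return min(sum(costs[j] * a[j] for j in range(len(a)))
               for a in itertools.product([0, 1], repeat=len(sub["vars"]))
               if sub["feasible"](a))

# Sequential min-marginal averaging: for every variable, shift cost between its
# copies so that all subproblems containing it agree on the min-marginal
# difference; the total cost of each variable is preserved.
for _ in range(20):
    for i in range(len(c)):
        entries = [(s_idx, s["vars"].index(i))
                   for s_idx, s in enumerate(subproblems) if i in s["vars"]]
        mms = [min_marginals(subproblems[s_idx], lam[s_idx], k)
               for s_idx, k in entries]
        diffs = [m1 - m0 for m0, m1 in mms]
        avg = sum(diffs) / len(diffs)
        for (s_idx, k), d in zip(entries, diffs):
            lam[s_idx][k] += avg - d

# The dual lower bound is the sum of the per-subproblem minima.
print("dual lower bound:",
      sum(subproblem_min(s, costs) for s, costs in zip(subproblems, lam)))

In the paper, each subproblem is a BDD, so the analogous min-marginal computations reduce to forward and backward passes over the diagram; the abstract notes that these need only elementary operations with little synchronization, which is what makes the updates suitable for massive GPU parallelism.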




Has companion code repository: https://github.com/lpmp/bdd







