OptNet: Differentiable Optimization as a Layer in Neural Networks
Publication: Q6283778
arXiv: 1703.00443
MaRDI QID: Q6283778
Author name not available
Publication date: 1 March 2017
Abstract: This paper presents OptNet, a network architecture that integrates optimization problems (here, specifically in the form of quadratic programs) as individual layers in larger end-to-end trainable deep networks. These layers encode constraints and complex dependencies between the hidden states that traditional convolutional and fully-connected layers often cannot capture. We explore the foundations for such an architecture: we show how techniques from sensitivity analysis, bilevel optimization, and implicit differentiation can be used to exactly differentiate through these layers and with respect to layer parameters; we develop a highly efficient solver for these layers that exploits fast GPU-based batch solves within a primal-dual interior point method, and which provides backpropagation gradients with virtually no additional cost on top of the solve; and we highlight the application of these approaches to several problems. In one notable example, the method learns to play mini-Sudoku (4x4) given just input and output games, with no a priori information about the rules of the game; this highlights the ability of OptNet to learn hard constraints better than other neural architectures.
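As a sketch of the differentiation step the abstract describes, reconstructed here from the standard KKT-based derivation (the notation is assumed rather than quoted from the paper): the layer solves a quadratic program

\[
z^\star = \operatorname*{argmin}_{z} \; \tfrac{1}{2} z^\top Q z + q^\top z
\quad \text{s.t.} \quad A z = b, \;\; G z \le h,
\]

whose KKT conditions (stationarity, primal feasibility, complementary slackness) are

\[
Q z^\star + q + A^\top \nu^\star + G^\top \lambda^\star = 0, \qquad
A z^\star = b, \qquad
D(\lambda^\star)\,(G z^\star - h) = 0,
\]

where \(D(\cdot)\) builds a diagonal matrix from a vector. Taking total differentials of these equations at the solution gives a linear system relating perturbations of the parameters to perturbations of the solution:

\[
\begin{bmatrix}
Q & G^\top & A^\top \\
D(\lambda^\star)\,G & D(G z^\star - h) & 0 \\
A & 0 & 0
\end{bmatrix}
\begin{bmatrix} \mathrm{d}z \\ \mathrm{d}\lambda \\ \mathrm{d}\nu \end{bmatrix}
= -
\begin{bmatrix}
\mathrm{d}Q\, z^\star + \mathrm{d}q + \mathrm{d}G^\top \lambda^\star + \mathrm{d}A^\top \nu^\star \\
D(\lambda^\star)\,(\mathrm{d}G\, z^\star - \mathrm{d}h) \\
\mathrm{d}A\, z^\star - \mathrm{d}b
\end{bmatrix}.
\]

Solving this system (in practice, its transpose against the incoming loss gradient) yields the gradients with respect to all layer parameters, and the factorization from the forward interior point solve can be reused, which is why the abstract claims the backward pass adds virtually no cost.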
Has companion code repository: https://github.com/locuslab/e2e-model-learning
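To make the idea concrete, below is a minimal runnable sketch in PyTorch, restricted to equality constraints only (the paper's solver also handles inequalities via a primal-dual interior point method). The helper name qp_layer and all toy values are invented for illustration; this is not the authors' implementation. Because torch.linalg.solve is differentiable, autograd through the KKT solve reproduces exactly what implicit differentiation of the optimality conditions gives in the equality-constrained case.

# Minimal sketch: an equality-constrained QP layer in PyTorch.
# Solves  min_z 1/2 z'Qz + q'z  s.t.  Az = b  via its KKT system,
# so autograd through the linear solve gives exact gradients -- the
# same result implicit differentiation of the KKT conditions yields.
import torch

def qp_layer(Q, q, A, b):  # hypothetical helper, not the paper's API
    n, m = Q.shape[0], A.shape[0]
    # KKT system:  [Q A'; A 0] [z; nu] = [-q; b]
    K = torch.zeros(n + m, n + m, dtype=Q.dtype)
    K[:n, :n] = Q
    K[:n, n:] = A.T
    K[n:, :n] = A
    rhs = torch.cat([-q, b])
    sol = torch.linalg.solve(K, rhs)
    return sol[:n]  # optimal z*; sol[n:] holds the dual variables

# Toy usage: learn q so the layer's output matches a target point
# on the constraint plane sum(z) = 1.
torch.manual_seed(0)
L = torch.randn(3, 3)
Q = L @ L.T + torch.eye(3)            # positive definite
q = torch.zeros(3, requires_grad=True)
A = torch.ones(1, 3)
b = torch.tensor([1.0])
target = torch.tensor([0.2, 0.3, 0.5])

opt = torch.optim.Adam([q], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss = ((qp_layer(Q, q, A, b) - target) ** 2).sum()
    loss.backward()
    opt.step()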