Expressing linear equality constraints in feedforward neural networks
Publication: Q6416568
arXiv: 2211.04395 · MaRDI QID: Q6416568
Author name not available.
Publication date: 8 November 2022
Abstract: We seek to impose linear equality constraints in feedforward neural networks. As top-layer predictors are usually nonlinear, this is a difficult task if we seek to deploy standard convex optimization methods and strong duality. To overcome this, we introduce a new saddle-point Lagrangian with auxiliary predictor variables on which the constraints are imposed. Eliminating the auxiliary variables leads to a dual minimization problem over the Lagrange multipliers introduced to satisfy the linear constraints. This minimization problem is combined with the standard learning problem on the weight matrices. From this theoretical line of development, we obtain the surprising interpretation of the Lagrange parameters as additional, penultimate-layer hidden units with fixed weights stemming from the constraints. Consequently, standard minimization approaches can be used despite the inclusion of Lagrange parameters: a very satisfying, albeit unexpected, discovery. Examples ranging from multi-label classification to constrained autoencoders are envisaged in the future. The code has been made available at https://github.com/anandrajan0/smartalec
Has companion code repository: https://github.com/anandrajan0/smartalec
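The interpretation described in the abstract can be illustrated with a minimal numerical sketch. The snippet below is not the authors' implementation (see the smartalec repository for that); it only assumes a prediction of the form y = W h at the top layer, a constraint system A y = b, and shows that, for a fixed hidden activation h, the Lagrange multipliers act like extra penultimate-layer units whose outgoing weights are fixed to A^T. All dimensions and matrices here are hypothetical, chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: hidden width, output width, number of constraints
d_h, d_y, m = 5, 4, 2

h = rng.normal(size=d_h)          # penultimate-layer activations for one input
W = rng.normal(size=(d_y, d_h))   # learned top-layer weights
A = rng.normal(size=(m, d_y))     # constraint matrix: we require A @ y == b
b = rng.normal(size=m)

# The unconstrained prediction generally violates A y = b
y_free = W @ h

# Multipliers enter as additional hidden units with fixed outgoing
# weights A^T; for fixed h, choosing them to satisfy the constraints
# means solving A (W h + A^T lam) = b for lam.
lam = np.linalg.solve(A @ A.T, b - A @ y_free)
y_con = y_free + A.T @ lam

print(np.abs(A @ y_con - b).max())  # near machine precision: constraints hold
```

In the paper's formulation the multipliers are obtained from a dual minimization solved jointly with learning the weights, rather than from this closed-form projection; the sketch only demonstrates why the multiplier term can be read as a penultimate-layer unit with constraint-determined fixed weights.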