Inexact bilevel stochastic gradient methods for constrained and unconstrained lower-level problems
Publication: MaRDI QID Q6379161
arXiv: 2110.00604
Author name not available
Publication date: 1 October 2021
Abstract: Two-level stochastic optimization formulations have become instrumental in a number of machine learning contexts, such as continual learning, neural architecture search, adversarial learning, and hyperparameter tuning. Practical stochastic bilevel optimization problems become challenging when the number of variables is high or when constraints are present. In this paper, we introduce a bilevel stochastic gradient method for bilevel problems with lower-level constraints. We also present a comprehensive convergence theory that covers all forms of inexactness in the calculation of the adjoint gradient (also called the hypergradient) and addresses both the lower-level unconstrained and constrained cases. To promote the use of bilevel optimization in large-scale learning, we introduce a practical bilevel stochastic gradient method (BSG-1) that requires neither second-order derivatives nor, in the lower-level unconstrained case, any linear-system solves or matrix-vector products.
Has companion code repository: https://github.com/gdkent/bsg_methods_con_unc
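The abstract's central computational object is the adjoint gradient (hypergradient) of the upper-level objective, grad F(x) = grad_x f - grad_xy^2 g [grad_yy^2 g]^{-1} grad_y f, and the BSG-1 idea is to replace both second-order blocks of g by rank-one products of first-order gradients, removing the Hessian evaluations, the linear-system solve, and the matrix-vector products. The Python sketch below contrasts the two computations on a toy quadratic bilevel problem; the problem data (f, g, a, x, y) and all variable names are illustrative assumptions and are not taken from the paper or the linked repository.

import numpy as np

# Toy bilevel problem (illustrative only):
#   upper level:  min_x f(x, y(x)),  f(x, y) = 0.5*||y - a||^2 + 0.5*||x||^2
#   lower level:  y(x) = argmin_y g(x, y),  g(x, y) = 0.5*||y - x||^2
a = np.array([1.0, -2.0, 3.0])   # upper-level target (assumed data)
x = np.array([0.5, 0.0, 1.0])    # upper-level variables
y = x + 0.1                      # inexact lower-level solution, so grad_y g != 0

# Partial gradients of f and g at (x, y).
grad_fx, grad_fy = x, y - a
grad_gx, grad_gy = x - y, y - x

# Exact adjoint gradient: needs the Hessian blocks of g (known in closed
# form for this toy g) and one linear-system solve.
g_xy, g_yy = -np.eye(3), np.eye(3)
exact = grad_fx - g_xy @ np.linalg.solve(g_yy, grad_fy)

# BSG-1-style rank-one approximation built from first-order gradients only:
# no Hessians, no system solves, no matrix-vector products.
coeff = (grad_gy @ grad_fy) / (grad_gy @ grad_gy)
bsg1 = grad_fx - coeff * grad_gx

print("exact adjoint gradient:", exact)
print("BSG-1 approximation:  ", bsg1)

As the printout suggests, the rank-one surrogate can be crude at a single evaluation point; in the paper's setting it is applied inside a stochastic iteration whose convergence theory explicitly accounts for such inexact adjoint gradients.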