
Implicit differentiation for fast hyperparameter selection in non-smooth convex learning

From MaRDI portal
Publication:6366825

arXiv: 2105.01637 · MaRDI QID: Q6366825

Alexandre Gramfort, Quentin Bertrand, Mathurin Massias, Mathieu Blondel, Quentin Klopfenstein, Samuel Vaiter, Joseph Salmon

Publication date: 4 May 2021

Abstract: Finding the optimal hyperparameters of a model can be cast as a bilevel optimization problem, typically solved using zero-order techniques. In this work we study first-order methods when the inner optimization problem is convex but non-smooth. We show that forward-mode differentiation of proximal gradient descent and proximal coordinate descent yields sequences of Jacobians converging toward the exact Jacobian. Using implicit differentiation, we show it is possible to leverage the non-smoothness of the inner problem to speed up the computation. Finally, we provide a bound on the error made on the hypergradient when the inner optimization problem is solved approximately. Results on regression and classification problems reveal computational benefits for hyperparameter optimization, especially when multiple hyperparameters are required.
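The two routes to the Jacobian described in the abstract can be illustrated on the Lasso. The NumPy sketch below is an illustrative assumption of how the techniques work, not the authors' sparse-ho implementation (which is in the linked repository): `ista_forward_diff` differentiates each proximal gradient (ISTA) iteration in forward mode, and `lasso_jacobian_implicit` exploits the non-smoothness by solving a small linear system restricted to the support of the solution.

```python
import numpy as np


def soft_threshold(z, t):
    """Soft-thresholding, the proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)


def ista_forward_diff(X, y, lam, n_iter=2000):
    """ISTA for the Lasso, propagating J = dbeta/dlam in forward mode."""
    p = X.shape[1]
    L = np.linalg.norm(X, ord=2) ** 2  # Lipschitz constant of the data-fit gradient
    beta, J = np.zeros(p), np.zeros(p)
    for _ in range(n_iter):
        z = beta - X.T @ (X @ beta - y) / L   # gradient step
        dz = J - X.T @ (X @ J) / L            # forward-mode derivative of that step
        beta = soft_threshold(z, lam / L)
        # ST(z, lam/L) has derivative 1 in z and -sign(z)/L in lam where |z| > lam/L
        support = np.abs(z) > lam / L
        J = np.where(support, dz - np.sign(z) / L, 0.0)
    return beta, J


def lasso_jacobian_implicit(X, beta):
    """Implicit differentiation: a linear system restricted to the support."""
    S = beta != 0
    J = np.zeros_like(beta)
    # Differentiating the optimality condition
    # X_S^T (X_S beta_S - y) + lam * sign(beta_S) = 0 with respect to lam gives
    # dbeta_S/dlam = -(X_S^T X_S)^{-1} sign(beta_S).
    J[S] = -np.linalg.solve(X[:, S].T @ X[:, S], np.sign(beta[S]))
    return J
```

On a converged solution the two Jacobians agree, but the implicit route only touches the (typically small) support, which is the source of the speed-up the abstract refers to.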




Has companion code repository: https://github.com/QB3/sparse-ho








This page was built for publication: Implicit differentiation for fast hyperparameter selection in non-smooth convex learning
