Structured Sparsity Inducing Adaptive Optimizers for Deep Learning
Publication: 6360064
arXiv: 2102.03869
MaRDI QID: Q6360064
Author name not available
Publication date: 7 February 2021
Abstract: The parameters of a neural network are naturally organized in groups, some of which might not contribute to its overall performance. To prune out unimportant groups of parameters, we can add a non-differentiable penalty to the objective function and minimize it using proximal gradient methods. In this paper, we derive the weighted proximal operator, a necessary component of these proximal methods, for two structured sparsity inducing penalties. Moreover, these operators can be approximated efficiently with a numerical solver, and despite this approximation, we prove that existing convergence guarantees are preserved when they are integrated into a generic adaptive proximal method. Finally, we show that this adaptive method, together with the weighted proximal operators derived here, is indeed capable of finding solutions with structured sparsity patterns on representative examples from computer vision and natural language processing.
Companion code repository: https://github.com/ricupa/less-is-more-towards-parsimonious-multi-task-models-using-structured-sparsity
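The abstract describes approximating a weighted proximal operator with a numerical solver inside an adaptive proximal method. As an illustration of the general idea, not the paper's exact algorithm, the sketch below computes the proximal operator of the group l2 (group lasso) penalty under a hypothetical diagonal metric diag(d), such as the preconditioner an Adam-style optimizer maintains, using bisection on the norm of the solution. The function name weighted_group_prox and all parameter names are assumptions introduced here.

import numpy as np

def weighted_group_prox(v, d, lam, tol=1e-10, max_iter=100):
    """Approximate the weighted proximal operator of the group l2 norm,

        argmin_x  0.5 * (x - v)^T diag(d) (x - v) + lam * ||x||_2,

    for one parameter group, with d > 0 elementwise. Hypothetical sketch:
    d plays the role of the diagonal preconditioner an adaptive optimizer
    would supply.
    """
    dv = d * v
    # Subgradient condition at zero: x = 0 is optimal iff ||diag(d) v||_2 <= lam.
    if np.linalg.norm(dv) <= lam:
        return np.zeros_like(v)
    # Otherwise stationarity, diag(d)(x - v) + lam * x / ||x||_2 = 0, gives
    # x_i(r) = d_i v_i r / (d_i r + lam) with r = ||x||_2, so we root-find
    # g(r) = ||x(r)||_2 - r, which has a unique root in (0, ||v||_2].
    def g(r):
        return np.linalg.norm(dv * r / (d * r + lam)) - r
    lo, hi = 0.0, np.linalg.norm(v)
    for _ in range(max_iter):
        r = 0.5 * (lo + hi)
        if g(r) > 0:
            lo = r          # root lies above r
        else:
            hi = r          # root lies at or below r
        if hi - lo < tol:
            break
    r = 0.5 * (lo + hi)
    return dv * r / (d * r + lam)

A single adaptive proximal step on a parameter group x with gradient grad, step size eta, and preconditioner diag(d) would then take the hypothetical form weighted_group_prox(x - eta * grad / d, d / eta, lam); when d is constant this reduces to the ordinary block soft-thresholding operator of the group lasso, which is the closed-form case the bisection generalizes.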