A Stochastic Proximal Method for Nonsmooth Regularized Finite Sum Optimization
From MaRDI portal
Publication: 6401982
arXiv: 2206.06531
MaRDI QID: Q6401982
Author name not available
Publication date: 13 June 2022
Abstract: We consider the problem of training a deep neural network with nonsmooth regularization to retrieve a sparse and efficient substructure. Our regularizer is only assumed to be lower semi-continuous and prox-bounded. We combine an adaptive quadratic regularization approach with proximal stochastic gradient principles to derive a new solver, called SR2, whose convergence and worst-case complexity are established without knowledge or approximation of the gradient's Lipschitz constant. We formulate a stopping criterion that ensures an appropriate first-order stationarity measure converges to zero under certain conditions. We establish a worst-case iteration complexity of O(ε⁻²) that matches those of related methods like ProxGEN, where the learning rate is assumed to be related to the Lipschitz constant. Our experiments on network instances trained on CIFAR-10 and CIFAR-100 with ℓ₁ and ℓ₀ regularizations show that SR2 consistently achieves higher sparsity and accuracy than related methods such as ProxGEN and ProxSGD.
Has companion code repository: https://github.com/dounialakhmiri/sr2
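The abstract combines an adaptive quadratic regularization approach with proximal stochastic gradient steps. Below is a minimal, generic sketch of such a loop on an ℓ₁-regularized least-squares toy problem; it is not the authors' SR2 implementation (see the linked repository), and the parameter names and values (sigma, eta1, gamma_inc, gamma_dec) and the trust-region-style acceptance test are illustrative assumptions.

```python
import numpy as np

def prox_l1(z, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Toy problem: sparse least squares, f(w) = 0.5*||Aw - b||^2, h(w) = lam*||w||_1.
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 50))
x_true = np.zeros(50)
x_true[:5] = rng.normal(size=5)
b = A @ x_true + 0.01 * rng.normal(size=200)
lam = 0.1

def f(w):
    r = A @ w - b
    return 0.5 * r @ r

def stochastic_grad(w, idx):
    """Minibatch estimate of grad f, rescaled to the full-data size."""
    Ai, bi = A[idx], b[idx]
    return (A.shape[0] / len(idx)) * (Ai.T @ (Ai @ w - bi))

w = np.zeros(50)
sigma = 1.0                                 # adaptive quadratic-regularization parameter
eta1, gamma_inc, gamma_dec = 0.1, 2.0, 0.5  # illustrative acceptance/update constants

for k in range(500):
    idx = rng.choice(A.shape[0], size=32, replace=False)
    g = stochastic_grad(w, idx)
    # Trial step: minimize g.s + (sigma/2)*||s||^2 + lam*||w + s||_1 over s,
    # whose closed-form solution is soft-thresholding of w - g/sigma.
    w_trial = prox_l1(w - g / sigma, lam / sigma)
    s = w_trial - w
    # Predicted (model) decrease vs. actual decrease of f + h.
    pred = -(g @ s + 0.5 * sigma * (s @ s)) + lam * (np.abs(w).sum() - np.abs(w_trial).sum())
    ared = (f(w) + lam * np.abs(w).sum()) - (f(w_trial) + lam * np.abs(w_trial).sum())
    if pred > 0 and ared >= eta1 * pred:
        w, sigma = w_trial, max(gamma_dec * sigma, 1e-3)  # accept step, relax regularization
    else:
        sigma *= gamma_inc                                 # reject step, tighten regularization

print("nonzeros:", int(np.count_nonzero(np.abs(w) > 1e-8)))
```

On this toy instance the accepted iterates become increasingly sparse because every step applies soft-thresholding; the adaptive sigma plays the role that a Lipschitz-based learning rate plays in methods such as ProxGEN.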