Variance-reduced Clipping for Non-convex Optimization
Publication: 6428131
arXiv: 2303.00883
MaRDI QID: Q6428131
Author name not available
Publication date: 1 March 2023
Abstract: Gradient clipping is a standard training technique used in deep learning applications such as large-scale language modeling to mitigate exploding gradients. Recent experimental studies have demonstrated a fairly special behavior in the smoothness of the training objective along its trajectory when trained with gradient clipping: the smoothness grows with the gradient norm. This is in clear contrast to the well-established assumption in folklore non-convex optimization, a.k.a. $L$-smoothness, where the smoothness is assumed to be bounded by a constant globally. The recently introduced $(L_0, L_1)$-smoothness is a more relaxed notion that captures such behavior in non-convex optimization. In particular, it has been shown that under this relaxed smoothness assumption, SGD with clipping requires $\mathcal{O}(\epsilon^{-4})$ stochastic gradient computations to find an $\epsilon$-stationary solution. In this paper, we employ a variance reduction technique, namely SPIDER, and demonstrate that for a carefully designed learning rate, this complexity is improved to $\mathcal{O}(\epsilon^{-3})$, which is order-optimal. Our designed learning rate comprises the clipping technique to mitigate the growing smoothness. Moreover, when the objective function is the average of $n$ components, we improve the existing bound on the stochastic gradient complexity to $\mathcal{O}(\sqrt{n}\,\epsilon^{-2})$, which is order-optimal as well. In addition to being theoretically optimal, SPIDER with our designed parameters demonstrates comparable empirical performance against variance-reduced methods such as SVRG and SARAH in several vision tasks.
Has companion code repository: https://github.com/haochuan-mit/varaince-reduced-clipping-for-non-convex-optimization
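As a rough illustration of the approach described in the abstract (not the paper's exact algorithm, constants, or the parameter choices used in its analysis), the sketch below combines SPIDER's recursive gradient estimator with a clipped step size of the form min(eta, gamma/||v||) for a finite-sum objective. The helper grad_i and the constants T, q, batch, eta, and gamma are illustrative placeholders.

```python
import numpy as np

def spider_clipped(grad_i, x0, n, T=1000, q=32, batch=32,
                   eta=0.1, gamma=0.5, rng=None):
    """Illustrative SPIDER loop with a clipped step size for a
    finite-sum objective f(x) = (1/n) * sum_i f_i(x).

    grad_i(x, idx) must return the average gradient of the components
    indexed by `idx` at the point x (a NumPy array).
    """
    rng = np.random.default_rng() if rng is None else rng
    x_prev, x = None, x0.copy()
    v = None
    for t in range(T):
        if t % q == 0:
            # Checkpoint step: recompute the estimator from a full pass.
            v = grad_i(x, np.arange(n))
        else:
            # SPIDER recursion: correct the previous estimator with a
            # mini-batch gradient difference between x and x_prev.
            idx = rng.integers(0, n, size=batch)
            v = v + grad_i(x, idx) - grad_i(x_prev, idx)
        # Clipped step size: constant eta when ||v|| is small, and a
        # normalized step bounded by gamma when ||v|| is large.
        step = min(eta, gamma / (np.linalg.norm(v) + 1e-12))
        x_prev, x = x, x - step * v
    return x

# Example usage on least-squares components f_i(x) = 0.5 * (a_i @ x - b_i)**2.
A = np.random.default_rng(0).normal(size=(200, 10))
b = A @ np.ones(10)
grad_i = lambda x, idx: A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)
x_hat = spider_clipped(grad_i, x0=np.zeros(10), n=200)
```

The min(eta, gamma/||v||) rule mimics gradient clipping: for small estimator norms it acts as a constant learning rate, while for large norms the effective displacement per step is bounded by gamma, which is the mechanism the abstract refers to for mitigating smoothness that grows with the gradient norm.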