Implicit Regularization of Discrete Gradient Dynamics in Linear Neural Networks

From MaRDI portal
Publication:6318002

arXiv: 1904.13262
MaRDI QID: Q6318002

Gauthier Gidel, Francis Bach, Simon Lacoste-Julien

Publication date: 30 April 2019

Abstract: When optimizing over-parameterized models, such as deep neural networks, a large set of parameters can achieve zero training error. In such cases, the choice of the optimization algorithm and its hyper-parameters introduces biases that lead to convergence to specific minimizers of the objective. Consequently, this choice can be considered an implicit regularization for the training of over-parameterized models. In this work, we push this idea further by studying the discrete gradient dynamics of the training of a two-layer linear network with the least-squares loss. Using a time rescaling, we show that, with a vanishing initialization and a small enough step size, this dynamics sequentially learns the solutions of a reduced-rank regression with a gradually increasing rank.




Has companion code repository: https://github.com/GauthierGidel/Implicit-Regularization-of-Discrete-Gradient-Dynamics-in-Linear-Neural-Networks
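The repository above contains the authors' implementation. As a separate, minimal illustrative sketch (not the authors' code), the following Python snippet runs plain gradient descent on a two-layer linear network with the least-squares loss, a vanishing initialization, and a small step size; the dimensions, initialization scale, and step size are illustrative assumptions. Printing the top singular values of the product W2 W1 over training shows them being picked up one at a time, i.e. the iterates pass near reduced-rank regression solutions of gradually increasing rank, as described in the abstract.

```python
import numpy as np

# Sketch: gradient descent on f(x) = W2 @ W1 @ x with least-squares loss.
# All sizes and hyper-parameters below are illustrative assumptions.
rng = np.random.default_rng(0)

d, k, n = 6, 6, 200                      # input/output dim, hidden width, samples
X = rng.standard_normal((n, d))

# Ground-truth rank-3 linear map with well-separated singular values.
U, _ = np.linalg.qr(rng.standard_normal((d, d)))
V, _ = np.linalg.qr(rng.standard_normal((d, d)))
W_star = U @ np.diag([5.0, 3.0, 1.5, 0.0, 0.0, 0.0]) @ V.T
Y = X @ W_star.T

eps, lr = 1e-4, 1e-3                     # vanishing init scale, small step size
W1 = eps * rng.standard_normal((k, d))
W2 = eps * rng.standard_normal((d, k))

for t in range(20001):
    R = X @ (W2 @ W1).T - Y              # residuals, shape (n, d)
    G = R.T @ X / n                      # gradient of the loss w.r.t. the product W2 W1
    # Simultaneous gradient step on both layers.
    W2, W1 = W2 - lr * (G @ W1.T), W1 - lr * (W2.T @ G)
    if t % 2000 == 0:
        sv = np.linalg.svd(W2 @ W1, compute_uv=False)[:3]
        print(f"iter {t:5d}  top singular values of W2 W1: {np.round(sv, 3)}")
```

With the settings above, the largest singular value is learned first, followed by the second and then the third, while the remaining singular values stay near zero; this is the sequential, incremental-rank behaviour the paper analyzes.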
