Gradient Starvation: A Learning Proclivity in Neural Networks
Publication: 6354054
arXiv: 2011.09468
MaRDI QID: Q6354054
Author name not available
Publication date: 18 November 2020
Abstract: We identify and formalize a fundamental gradient descent phenomenon resulting in a learning proclivity in over-parameterized neural networks. Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task, despite the presence of other predictive features that fail to be discovered. This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks. Using tools from Dynamical Systems theory, we identify simple properties of learning dynamics during gradient descent that lead to this imbalance, and prove that such a situation can be expected given certain statistical structure in training data. Based on our proposed formalism, we develop guarantees for a novel regularization method aimed at decoupling feature learning dynamics, improving accuracy and robustness in cases hindered by gradient starvation. We illustrate our findings with simple and real-world out-of-distribution (OOD) generalization experiments.
Has companion code repository: https://github.com/facebookresearch/DomainBed
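The regularization method the abstract alludes to is introduced in the paper as Spectral Decoupling: the cross-entropy loss is augmented with an L2 penalty on the network's logits, L_SD = CE + (lambda/2) * ||y_hat||^2. The following is a minimal PyTorch sketch of that idea; the toy model, batch shapes, and the coefficient sd_coeff are illustrative assumptions, not settings taken from the paper or the linked repository.

    # Minimal sketch of a Spectral-Decoupling-style loss: cross-entropy
    # plus an L2 penalty on the logits. Hyperparameters and shapes below
    # are placeholders for illustration only.
    import torch
    import torch.nn.functional as F

    def spectral_decoupling_loss(logits, targets, sd_coeff=0.1):
        """Cross-entropy plus (sd_coeff / 2) * mean squared logits.

        Penalizing the logits (rather than the weights) damps the
        gradient contribution of already-dominant features, leaving
        room for weaker predictive features to be learned.
        """
        ce = F.cross_entropy(logits, targets)
        penalty = 0.5 * sd_coeff * (logits ** 2).mean()
        return ce + penalty

    # Illustrative usage with a hypothetical 2-class linear model:
    model = torch.nn.Linear(20, 2)        # assumed input dim 20, 2 classes
    x = torch.randn(32, 20)               # a batch of 32 random examples
    y = torch.randint(0, 2, (32,))        # random labels for the sketch
    loss = spectral_decoupling_loss(model(x), y)
    loss.backward()

Using the mean of the squared logits (rather than the sum) keeps sd_coeff roughly comparable across batch sizes and class counts; this is a design choice of the sketch, not a prescription from the paper.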