Overparameterized ReLU Neural Networks Learn the Simplest Models: Neural Isometry and Exact Recovery
Publication: 6412440
arXiv: 2209.15265
MaRDI QID: Q6412440
Author name not available
Publication date: 30 September 2022
Abstract: The practice of deep learning has shown that neural networks generalize remarkably well even with an extreme number of learned parameters. This appears to contradict traditional statistical wisdom, in which a trade-off between model complexity and fit to the data is essential. We aim to address this discrepancy by adopting a convex optimization and sparse recovery perspective. We consider the training and generalization properties of two-layer ReLU networks with standard weight decay regularization. Under certain regularity assumptions on the data, we show that ReLU networks with an arbitrary number of parameters learn only simple models that explain the data. This is analogous to the recovery of the sparsest linear model in compressed sensing. For ReLU networks and their variants with skip connections or normalization layers, we present isometry conditions that ensure the exact recovery of planted neurons. For randomly generated data, we show the existence of a phase transition in recovering planted neural network models, which is easy to describe: whenever the ratio between the number of samples and the dimension exceeds a numerical threshold, the recovery succeeds with high probability; otherwise, it fails with high probability. Surprisingly, ReLU networks learn simple and sparse models that generalize well even when the labels are noisy. The phase transition phenomenon is confirmed through numerical experiments.
Has companion code repository: https://github.com/pilancilab/neural-recovery
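The phase-transition experiment described in the abstract can be illustrated with a short, self-contained sketch. The Python code below is not taken from the linked repository and does not reproduce the paper's method; it is a minimal illustration under simplifying assumptions: labels are generated by a single planted ReLU neuron, and an overparameterized two-layer ReLU network is fit by plain gradient descent on the squared loss with weight decay (the paper itself analyzes a convex reformulation). All names and hyperparameters (m, lam, lr, steps) are illustrative choices, and test error is used only as a rough proxy for recovery of the planted neuron.

```python
# Minimal sketch (not the authors' code) of a planted-neuron recovery experiment:
# fit an overparameterized two-layer ReLU network with weight decay to labels
# generated by a single planted ReLU neuron, sweeping the ratio n/d.
import numpy as np


def relu(z):
    return np.maximum(z, 0.0)


def planted_neuron_experiment(n, d, m=50, lam=1e-3, lr=0.05, steps=10000, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, d))
    w_star = rng.standard_normal(d)
    w_star /= np.linalg.norm(w_star)
    y = relu(X @ w_star)                      # noiseless planted-neuron labels

    # Overparameterized two-layer ReLU network trained by plain gradient descent
    # on the squared loss plus weight decay (an illustrative stand-in for the
    # convex reformulation analyzed in the paper).
    W = rng.standard_normal((d, m)) / np.sqrt(d)   # first-layer weights
    a = rng.standard_normal(m) * 0.1               # second-layer weights
    for _ in range(steps):
        H = relu(X @ W)                            # hidden activations, n x m
        resid = H @ a - y
        grad_a = H.T @ resid / n + lam * a
        grad_W = X.T @ ((resid[:, None] * (H > 0)) * a[None, :]) / n + lam * W
        a -= lr * grad_a
        W -= lr * grad_W

    # Generalization error on fresh samples as a proxy for recovering the neuron.
    X_test = rng.standard_normal((2000, d))
    test_mse = np.mean((relu(X_test @ W) @ a - relu(X_test @ w_star)) ** 2)
    return test_mse


d = 10
for ratio in (1, 2, 4, 8):
    mse = planted_neuron_experiment(n=ratio * d, d=d)
    print(f"n/d = {ratio}: test MSE = {mse:.4f}")
```

Qualitatively, the test error should drop sharply once the ratio n/d crosses a threshold, mirroring the phase transition described in the abstract; the precise threshold depends on the architecture variant and is characterized in the paper, and the linked repository contains the authors' actual implementation.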