Provable Generalization of SGD-trained Neural Networks of Any Width in the Presence of Adversarial Label Noise
arXiv: 2101.01152
MaRDI QID: Q6357568
Spencer Frei, Yuan Cao, Quanquan Gu
Publication date: 4 January 2021
Abstract: We consider a one-hidden-layer leaky ReLU network of arbitrary width trained by stochastic gradient descent (SGD) following an arbitrary initialization. We prove that SGD produces neural networks that have classification accuracy competitive with that of the best halfspace over the distribution for a broad class of distributions that includes log-concave isotropic and hard margin distributions. Equivalently, such networks can generalize when the data distribution is linearly separable but corrupted with adversarial label noise, despite the capacity to overfit. To the best of our knowledge, this is the first work to show that overparameterized neural networks trained by SGD can generalize when the data is corrupted with adversarial label noise.
Has companion code repository: https://github.com/spencerfrei/nn_generalization_agnostic_noise
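Below is a minimal sketch of the setting the abstract describes; it is not the companion repository's code. It trains a one-hidden-layer leaky ReLU network of arbitrary width by online SGD on data that is linearly separable by a ground-truth halfspace but has a fraction of labels flipped (random flipping is used here as a simple stand-in for the paper's adversarial label noise). All hyperparameters (width, learning rate, noise rate, step count) are illustrative assumptions.

```python
# Sketch, not the paper's companion code: one-hidden-layer leaky ReLU
# network of arbitrary width, trained by online SGD on linearly separable
# data with flipped labels (a stand-in for adversarial label noise).
import torch

torch.manual_seed(0)
d, width, lr, steps, noise_rate = 20, 512, 0.01, 5000, 0.1  # illustrative

# Ground-truth halfspace defining the clean labels.
w_star = torch.randn(d)
w_star /= w_star.norm()

def sample_batch(n):
    x = torch.randn(n, d)              # isotropic Gaussian (log-concave)
    y = torch.sign(x @ w_star)         # clean labels from the best halfspace
    flip = torch.rand(n) < noise_rate  # flip a fraction of the labels
    y[flip] = -y[flip]
    return x, y

# One-hidden-layer leaky ReLU network; only the first layer is trained,
# with fixed random-sign second-layer weights (a common simplification).
W = torch.randn(width, d, requires_grad=True)  # arbitrary initialization
a = torch.sign(torch.randn(width)) / width     # fixed output weights

def net(x):
    return torch.nn.functional.leaky_relu(x @ W.T, negative_slope=0.1) @ a

opt = torch.optim.SGD([W], lr=lr)
for _ in range(steps):
    x, y = sample_batch(1)  # online SGD: one fresh noisy sample per step
    loss = torch.nn.functional.softplus(-y * net(x)).mean()  # logistic loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# Evaluate against the *clean* labels: the guarantee is classification
# accuracy competitive with the best halfspace over the distribution.
x_test = torch.randn(10000, d)
y_clean = torch.sign(x_test @ w_star)
acc = (torch.sign(net(x_test)) == y_clean).float().mean()
print(f"clean test accuracy: {acc:.3f}")
```

The online, one-sample-per-step loop mirrors the setting where SGD sees a fresh noisy example at each iteration; accuracy is measured against the clean labels because the paper's guarantee is relative to the best halfspace, not the corrupted labels.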