Redundant representations help generalization in wide neural networks
Publication: 6611451
DOI: 10.1088/1742-5468/aceb4f
MaRDI QID: Q6611451
Diego Doimo, Aldo Glielmo, Sebastian Goldt, Alessandro Laio
Publication date: 26 September 2024
Published in: Journal of Statistical Mechanics: Theory and Experiment
Cites Work
- High-dimensional dynamics of generalization error in neural networks
- Surprises in high-dimensional ridgeless least squares interpolation
- Mean field analysis of neural networks: a central limit theorem
- A mean field view of the landscape of two-layer neural networks
- Deep double descent: where bigger models and more data hurt
- The generalization error of random features regression: precise asymptotics and the double descent curve
- Benign overfitting in linear regression
- Reconciling modern machine-learning practice and the classical bias–variance trade-off
- Scaling description of generalization with number of parameters in deep learning
- Disentangling feature and lazy training in deep neural networks
- Dynamics of stochastic gradient descent for two-layer neural networks in the teacher–student setup
- A jamming transition from under- to over-parametrization affects generalization in deep learning
- The elements of statistical learning: data mining, inference, and prediction