Geometric compression of invariant manifolds in neural networks
DOI: 10.1088/1742-5468/abf1f3
OpenAlex: W3202748241
Wikidata: Q111520781 (Scholia: Q111520781)
MaRDI QID: Q3382321
Authors: Mario Geiger, Kevin Tyloo, Matthieu Wyart, Leonardo Petrini, Jonas Paccolat
Publication date: 21 September 2021
Published in: Journal of Statistical Mechanics: Theory and Experiment
Full work available at URL: https://arxiv.org/abs/2007.11471
Uses Software
- MnasNet

Cites Work
- A mean field view of the landscape of two-layer neural networks
- Learning Theory and Kernel Machines
- The space of interactions in neural network models
- On the information bottleneck theory of deep learning
- Scaling description of generalization with number of parameters in deep learning
- Disentangling feature and lazy training in deep neural networks
- A jamming transition from under- to over-parametrization affects generalization in deep learning
- Training Neural Networks as Learning Data-adaptive Kernels: Provable Representation and Approximation Benefits