Phase diagram of stochastic gradient descent in high-dimensional two-layer neural networks
Publication: 6611445
DOI: 10.1088/1742-5468/ad01b1
MaRDI QID: Q6611445
Authors: Lenka Zdeborová, Florent Krzakala, Rodrigo Veiga, Bruno Loureiro, Ludovic Stephan
Publication date: 26 September 2024
Published in: Journal of Statistical Mechanics: Theory and Experiment
Cites Work
- Algorithmic thresholds for tensor PCA
- Statistical mechanics of online learning of drifting concepts: A variational approach
- A Taylor expansion of the square root matrix function
- Mean field analysis of neural networks: a central limit theorem
- Optimal generalization in perceptrons
- On-line backpropagation in two-layered neural networks
- Learning by on-line gradient descent
- On-line learning in the committee machine
- A mean field view of the landscape of two-layer neural networks
- Trainability and Accuracy of Artificial Neural Networks: An Interacting Particle System Approach
- Phase retrieval via randomized Kaczmarz: theoretical guarantees
- The committee machine: computational to statistical gaps in learning a two-layers neural network
- Universality Laws for High-Dimensional Learning With Random Features