Align, then memorise: the dynamics of learning with feedback alignment*
From MaRDI portal
Publication: 5055410
DOI: 10.1088/1742-5468/ac9826 · OpenAlex: W3168722425 · MaRDI QID: Q5055410
Ruben Ohana, Stéphane D'Ascoli, Maria Refinetti, Sebastian Goldt
Publication date: 13 December 2022
Published in: Journal of Statistical Mechanics: Theory and Experiment
Full work available at URL: https://doi.org/10.1088/1742-5468/ac9826
Cites Work
- High-dimensional dynamics of generalization error in neural networks
- Mean field analysis of neural networks: a central limit theorem
- Statistical Mechanics of Learning
- Generalization in a linear perceptron in the presence of noise
- Theoretical Insights Into the Optimization Landscape of Over-Parameterized Shallow Neural Networks
- Learning by on-line gradient descent
- A mean field view of the landscape of two-layer neural networks
- Mean-field inference methods for neural networks
- Learning representations by back-propagating errors
- Dynamics of stochastic gradient descent for two-layer neural networks in the teacher–student setup*