Appearance of Random Matrix Theory in Deep Learning
Publication: 6360488
arXiv: 2102.06740
MaRDI QID: Q6360488
Author name not available
Publication date: 12 February 2021
Abstract: We investigate the local spectral statistics of the loss surface Hessians of artificial neural networks, where we discover excellent agreement with Gaussian Orthogonal Ensemble statistics across several network architectures and datasets. These results shed new light on the applicability of Random Matrix Theory to modelling neural networks and suggest a previously unrecognised role for it in the study of loss surfaces in deep learning. Inspired by these observations, we propose a novel model for the true loss surfaces of neural networks, consistent with our observations, which allows for Hessian spectral densities with rank degeneracy and outliers, extensively observed in practice, and predicts a growing independence of loss gradients as a function of distance in weight-space. We further investigate the importance of the true loss surface in neural networks and find, in contrast to previous work, that the exponential hardness of locating the global minimum has practical consequences for achieving state-of-the-art performance.
Has companion code repository: https://github.com/npbaskerville/dnn-rmt-spacings
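The abstract's central comparison is between the local eigenvalue statistics of measured Hessians and Gaussian Orthogonal Ensemble predictions. Below is a minimal, self-contained Python sketch of one common way to make such a comparison; it is an illustration only, not the code from the linked dnn-rmt-spacings repository. It computes the mean consecutive-spacing ratio of a spectrum (a local statistic that requires no unfolding) and an empirical GOE reference value; the name hessian_eigs in the final comment is a hypothetical placeholder for Hessian eigenvalues obtained separately, e.g. from a Lanczos estimator.

import numpy as np

def mean_spacing_ratio(eigenvalues):
    # Mean consecutive-spacing ratio: average of min(s_i, s_{i+1}) / max(s_i, s_{i+1}),
    # where s_i are gaps between sorted eigenvalues. No unfolding of the density is needed.
    s = np.diff(np.sort(np.asarray(eigenvalues, dtype=float)))
    s = s[s > 0]  # drop exact degeneracies
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return float(r.mean())

def goe_reference(n=200, trials=100, seed=0):
    # Empirical mean spacing ratio for n x n GOE samples; the large-n value is roughly 0.53.
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(trials):
        a = rng.standard_normal((n, n))
        h = (a + a.T) / np.sqrt(2.0)  # symmetrise a Gaussian matrix to obtain a GOE sample
        vals.append(mean_spacing_ratio(np.linalg.eigvalsh(h)))
    return float(np.mean(vals))

if __name__ == "__main__":
    # hessian_eigs (hypothetical) would hold measured Hessian eigenvalues of a trained network;
    # agreement of mean_spacing_ratio(hessian_eigs) with the GOE reference is the kind of
    # evidence for GOE local statistics that the abstract describes.
    print("GOE reference mean spacing ratio:", goe_reference())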