Over-parameterised Shallow Neural Networks with Asymmetrical Node Scaling: Global Convergence Guarantees and Feature Learning

Publication: 6425203

arXiv: 2302.01002
MaRDI QID: Q6425203

Author name not available

Publication date: 2 February 2023

Abstract: We consider the optimisation of large and shallow neural networks via gradient flow, where the output of each hidden node is scaled by some positive parameter. We focus on the case where the node scalings are non-identical, differing from the classical Neural Tangent Kernel (NTK) parameterisation. We prove that, for large neural networks, with high probability, gradient flow converges to a global minimum and, unlike in the NTK regime, can learn features. We also provide experiments on synthetic and real-world datasets illustrating our theoretical results and showing the benefit of such scaling in terms of pruning and transfer learning.
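To make the parameterisation concrete, below is a minimal numerical sketch (not the companion code) of a shallow ReLU network whose hidden-node outputs are scaled by fixed, non-identical positive parameters lambda_j, trained by full-batch gradient descent as a discretisation of gradient flow. The output form f(x) = sum_j sqrt(lambda_j) * a_j * relu(<w_j, x>), the power-law choice of the lambda_j, and the synthetic data are illustrative assumptions, not taken from the paper; in the symmetric NTK parameterisation every lambda_j would equal 1/m.

    import numpy as np

    # Minimal sketch: shallow ReLU network with asymmetrical node scaling,
    # trained by full-batch gradient descent (a discretisation of gradient flow).
    # The output form f(x) = sum_j sqrt(lambda_j) * a_j * relu(<w_j, x>) and the
    # power-law choice of lambda_j are assumptions made for illustration only;
    # identical scalings lambda_j = 1/m would recover an NTK-style parameterisation.

    rng = np.random.default_rng(0)
    m, d, n = 512, 5, 64                      # hidden nodes, input dim, sample size

    lam = 1.0 / np.arange(1, m + 1) ** 1.5    # non-identical node scalings (assumed)
    lam /= lam.sum()                          # normalised so that sum_j lambda_j = 1

    W = rng.normal(size=(m, d))               # input weights w_j
    a = rng.normal(size=m)                    # output weights a_j
    X = rng.normal(size=(n, d))               # synthetic inputs
    y = np.sin(X[:, 0])                       # synthetic regression targets

    def forward(X, W, a):
        h = np.maximum(X @ W.T, 0.0)          # hidden activations, shape (n, m)
        return h @ (np.sqrt(lam) * a)         # each node scaled by sqrt(lambda_j)

    lr = 0.02
    for step in range(5000):
        h = np.maximum(X @ W.T, 0.0)
        resid = h @ (np.sqrt(lam) * a) - y    # residuals, shape (n,)
        grad_a = np.sqrt(lam) * (h.T @ resid) / n
        mask = (h > 0.0).astype(float)        # ReLU derivative
        grad_W = ((np.sqrt(lam) * a) * mask * resid[:, None]).T @ X / n
        a -= lr * grad_a
        W -= lr * grad_W

    print("final mse:", np.mean((forward(X, W, a) - y) ** 2))

Setting lam to a constant vector 1/m in this sketch gives the symmetric comparison point, so the only line that changes between the two regimes is the definition of the node scalings.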




Has companion code repository: https://github.com/anomdoubleblind/asymmetrical_scaling







