Learning compositional functions via multiplicative weight updates
Publication: 6343757
arXiv: 2006.14560
MaRDI QID: Q6343757
Anima Anandkumar, Jeremy Bernstein, Yisong Yue, Ming-Yu Liu, Jiawei Zhao, Markus Meister
Publication date: 25 June 2020
Abstract: Compositionality is a basic structural feature of both biological and artificial neural networks. Learning compositional functions via gradient descent incurs well-known problems such as vanishing and exploding gradients, making careful learning rate tuning essential for real-world applications. This paper proves that multiplicative weight updates satisfy a descent lemma tailored to compositional functions. Based on this lemma, we derive Madam -- a multiplicative version of the Adam optimiser -- and show that it can train state-of-the-art neural network architectures without learning rate tuning. We further show that Madam is easily adapted to train natively compressed neural networks by representing their weights in a logarithmic number system. We conclude by drawing connections between multiplicative weight updates and recent findings about synapses in biology.
Companion code repository: https://github.com/jxbz/madam
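To make the idea concrete, the following is a minimal sketch of a multiplicative weight update in the spirit described in the abstract. It is an illustration only: the function name `multiplicative_update`, its parameters, and the crude gradient normaliser are assumptions for this sketch, not the authors' exact Madam algorithm, whose reference implementation is in the companion repository linked above.

```python
import numpy as np

def multiplicative_update(w, grad, lr=0.01, eps=1e-8):
    """Illustrative multiplicative weight update (a sketch, not the exact
    Madam algorithm; see the companion repository for the authors' code).

    Each weight is scaled by an exponential factor, so the step size is
    proportional to the weight's own magnitude and its sign is preserved.
    """
    second_moment = grad ** 2                       # crude normaliser used only for this sketch
    g_hat = grad / (np.sqrt(second_moment) + eps)   # normalised gradient, roughly sign(grad)
    return w * np.exp(-lr * np.sign(w) * g_hat)     # multiplicative (relative) step

# Toy usage: one step on the quadratic loss L(w) = 0.5 * ||w||^2, where grad = w.
w = np.array([0.5, -2.0, 1.5])
w = multiplicative_update(w, grad=w)
print(w)  # each entry shrinks multiplicatively; signs are unchanged
```

Because the update multiplies each weight by a factor close to one, weights of very different magnitudes all change by a comparable relative amount, which is the intuition behind avoiding per-layer learning rate tuning.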