Gated Linear Networks
Publication: 6326495
arXiv: 1910.01526
MaRDI QID: Q6326495
Author name not available
Publication date: 30 September 2019
Abstract: This paper presents a new family of backpropagation-free neural architectures, Gated Linear Networks (GLNs). What distinguishes GLNs from contemporary neural networks is the distributed and local nature of their credit assignment mechanism; each neuron directly predicts the target, forgoing the ability to learn feature representations in favor of rapid online learning. Individual neurons can model nonlinear functions via the use of data-dependent gating in conjunction with online convex optimization. We show that this architecture gives rise to universal learning capabilities in the limit, with effective model capacity increasing as a function of network size in a manner comparable to deep ReLU networks. Furthermore, we demonstrate that the GLN learning mechanism possesses extraordinary resilience to catastrophic forgetting, performing comparably to an MLP with dropout and Elastic Weight Consolidation on standard benchmarks. These desirable theoretical and empirical properties position GLNs as a complementary technique to contemporary offline deep learning methods.
Has companion code repository: https://github.com/aiwabdn/pygln
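The abstract's core mechanism can be made concrete with a short sketch. Below is a minimal NumPy illustration of a single gated linear neuron: halfspace gates over the side information (the raw input) select a per-context weight vector, which geometrically mixes incoming probabilities; training is online gradient descent on the log loss, which is convex in the active weights. This is a sketch under stated assumptions, not the API of the companion pygln repository; the class name, hyperparameters, weight clipping bound, and toy data are all illustrative.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def logit(p):
    return np.log(p / (1.0 - p))

class GLNNeuron:
    # One gated linear neuron (illustrative, not from pygln): halfspace
    # gating picks a context, and the context's weight vector geometrically
    # mixes the incoming probabilities.
    def __init__(self, n_inputs, n_hyperplanes, side_dim, lr=0.01, rng=None):
        rng = rng if rng is not None else np.random.default_rng(0)
        # Fixed random halfspaces over the side information define the gates.
        self.hyperplanes = rng.normal(size=(n_hyperplanes, side_dim))
        self.biases = rng.normal(size=n_hyperplanes)
        # One weight vector per context: 2^m contexts for m hyperplanes.
        self.weights = np.full((2 ** n_hyperplanes, n_inputs), 1.0 / n_inputs)
        self.lr = lr

    def _context(self, z):
        # Binary code of which side of each hyperplane z falls on.
        bits = (self.hyperplanes @ z > self.biases).astype(int)
        return int(bits @ (2 ** np.arange(len(bits))))

    def predict(self, p, z):
        x = logit(np.clip(p, 1e-6, 1 - 1e-6))
        return sigmoid(self.weights[self._context(z)] @ x)

    def update(self, p, z, y):
        # One online gradient step on the log loss, local to this neuron;
        # the loss is convex in the active context's weights.
        c = self._context(z)
        x = logit(np.clip(p, 1e-6, 1 - 1e-6))
        pred = sigmoid(self.weights[c] @ x)
        self.weights[c] -= self.lr * (pred - y) * x
        self.weights[c] = np.clip(self.weights[c], -5.0, 5.0)  # bounded domain for OCO
        return pred

# Toy usage (hypothetical data): learn y = [z_1 > 0] online from weak base predictors.
rng = np.random.default_rng(1)
neuron = GLNNeuron(n_inputs=3, n_hyperplanes=2, side_dim=2, rng=rng)
for _ in range(2000):
    z = rng.normal(size=2)
    y = float(z[1] > 0)
    base = sigmoid(np.array([z[0], z[1], 0.0]))  # crude per-feature probabilities
    neuron.update(base, z, y)

Because each neuron solves its own convex problem directly against the target, credit assignment stays local, which is the property the abstract ties to rapid online learning and resilience to forgetting.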