Optimisation & Generalisation in Networks of Neurons
Publication: 6414364
arXiv: 2210.10101
MaRDI QID: Q6414364
Author name not available
Publication date: 18 October 2022
Abstract: The goal of this thesis is to develop the optimisation and generalisation theoretic foundations of learning in artificial neural networks. On optimisation, a new theoretical framework is proposed for deriving architecture-dependent first-order optimisation algorithms. The approach works by combining a "functional majorisation" of the loss function with "architectural perturbation bounds" that encode an explicit dependence on neural architecture. The framework yields optimisation methods that transfer hyperparameters across learning problems. On generalisation, a new correspondence is proposed between ensembles of networks and individual networks. It is argued that, as network width and normalised margin are taken large, the space of networks that interpolate a particular training set concentrates on an aggregated Bayesian method known as a "Bayes point machine". This correspondence provides a route for transferring PAC-Bayesian generalisation theorems over to individual networks. More broadly, the correspondence presents a fresh perspective on the role of regularisation in networks with vastly more parameters than data.
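Illustrative sketch: to make the majorise-minimise idea in the abstract concrete, the loss at a perturbed weight setting can be upper-bounded by the current loss, a first-order term, and an architecture-dependent penalty on the perturbation; minimising that bound suggests layer-wise steps whose scale depends on depth and on each layer's weight norm. The sketch below is an illustration only, not the thesis's actual algorithm and independent of the companion repository's API; the function name, the 1/depth scaling and the relative (weight-norm-normalised) step size are assumptions chosen for demonstration.

import numpy as np

def majorise_minimise_step(weights, grads, depth, eta=1.0):
    # One illustrative architecture-aware update (assumed scaling, for illustration only).
    # Each layer's gradient direction is rescaled by the layer's weight norm and divided
    # by the network depth, so the perturbation stays within a depth-dependent budget
    # of the kind an architectural perturbation bound would impose.
    new_weights = []
    for W, G in zip(weights, grads):
        g_norm = np.linalg.norm(G) + 1e-12        # guard against a zero gradient
        w_norm = np.linalg.norm(W) + 1e-12        # layer-wise weight norm
        step = (eta / depth) * w_norm * (G / g_norm)  # relative, depth-scaled step
        new_weights.append(W - step)
    return new_weights

# Toy usage on a random two-layer "network" (weight matrices only).
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
grads = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
weights = majorise_minimise_step(weights, grads, depth=len(weights))

Because the step size is expressed relative to each layer's weight norm and scaled by depth rather than set by a free learning-rate constant, an update rule of this general shape is one way a single hyperparameter setting could transfer across architectures, which is the property the abstract attributes to the proposed framework.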
Has companion code repository: https://github.com/jxbz/agd