Avoiding Spurious Local Minima in Deep Quadratic Networks

arXiv: 2001.00098
MaRDI QID: Q6332121

Author name not available

Publication date: 31 December 2019

Abstract: Despite their practical success, a theoretical understanding of the loss landscape of neural networks has proven challenging due to the high-dimensional, non-convex, and highly nonlinear structure of such models. In this paper, we characterize the training landscape of the mean squared error loss for neural networks with quadratic activation functions. We prove the existence of spurious local minima and saddle points, which can be escaped easily with probability one when the number of neurons is greater than or equal to the input dimension and the norm of the training samples is used as a regressor. We prove that deep overparameterized neural networks with quadratic activations benefit from similarly benign landscape properties. Our theoretical results are independent of the data distribution and fill an existing gap in the theory of two-layer quadratic neural networks. Finally, we empirically demonstrate convergence to a global minimum for these problems.
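The model analyzed in the abstract is straightforward to instantiate. The following is a minimal sketch, not the authors' companion code: a two-layer network with quadratic activations, f(x) = Σ_j (w_j · x)², with second-layer weights fixed to 1, trained by plain gradient descent on the mean squared error in a teacher–student setup where a zero-loss global minimum exists by construction. The dimensions, step size, and iteration count are illustrative assumptions, chosen so that the number of neurons equals the input dimension.

```python
import numpy as np

# Illustrative sketch only; all names and hyperparameters below are
# assumptions, not taken from the paper or its companion repository.
rng = np.random.default_rng(0)

d, k, n = 5, 5, 200                    # input dim, neurons (k >= d), samples
X = rng.standard_normal((n, d))        # training inputs

# Planted teacher network, so a zero-loss global minimum exists.
W_true = rng.standard_normal((k, d))
y = np.sum((X @ W_true.T) ** 2, axis=1)

W = 0.1 * rng.standard_normal((k, d))  # student weights, small random init
lr = 1e-3

for step in range(5001):
    Z = X @ W.T                        # pre-activations, shape (n, k)
    pred = np.sum(Z ** 2, axis=1)      # f(x_i) = sum_j (w_j . x_i)^2
    r = pred - y                       # residuals
    loss = np.mean(r ** 2)             # mean squared error
    # Gradient of the MSE w.r.t. W for the quadratic activation:
    # d(loss)/dW = (4/n) * (Z * r)^T X
    grad = 4.0 / n * (Z * r[:, None]).T @ X
    W -= lr * grad
    if step % 1000 == 0:
        print(f"step {step:5d}  loss {loss:.3e}")
```

On such planted instances the loss typically decreases toward zero, consistent with the convergence to a global minimum described above; this sketch does not reproduce the paper's experiments.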

Companion code repository: https://github.com/druckmann-lab/QuadraticNets