Theory of the Frequency Principle for General Deep Neural Networks
Publication: 6320863
arXiv: 1906.09235 | MaRDI QID: Q6320863
Author name not available
Publication date: 21 June 2019
Abstract: Alongside the fruitful applications of Deep Neural Networks (DNNs) to realistic problems, recent empirical studies of DNNs have reported a universal phenomenon, the Frequency Principle (F-Principle): a DNN tends to learn a target function from low to high frequencies during training. The F-Principle has proved useful for both qualitative and quantitative understandings of DNNs. In this paper, we rigorously investigate the F-Principle in the training dynamics of a general DNN at three stages: the initial stage, the intermediate stage, and the final stage. For each stage, we provide a theorem in terms of quantities that properly characterize the F-Principle. Our results are general in the sense that they hold for multilayer networks with general activation functions, general population densities of the data, and a large class of loss functions. Our work lays a theoretical foundation for the F-Principle toward a better understanding of the training process of DNNs.
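The abstract refers to "quantities that properly characterize the F-Principle" without defining them on this page. One standard quantity in the F-Principle literature (stated here as an illustrative assumption, not necessarily the exact quantity appearing in the paper's theorems) is the frequency-wise relative error of the network output:

```latex
% Relative error at frequency k after t steps of training.
\[
  \Delta(k,t) \;=\; \frac{\lvert \hat{h}(k,t) - \hat{f}(k) \rvert}{\lvert \hat{f}(k) \rvert}
\]
```

where \(\hat{f}\) is the Fourier transform of the target function and \(\hat{h}(\cdot,t)\) that of the DNN output at training step \(t\). In this notation, the F-Principle states that \(\Delta(k,t)\) decays to zero earlier in \(t\) for low frequencies than for high ones.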
Has companion code repository: https://github.com/xuzhiqin1990/F-Principle
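The following minimal sketch (independent of the companion repository; the network size, initialization, and learning rate are illustrative assumptions) demonstrates the phenomenon: a small tanh network is fit to a two-frequency 1D target, and the relative error Δ(k,t) at each target frequency is printed during full-batch gradient descent. Whatever the exact numbers, the low-frequency (k=1) error should fall well before the high-frequency (k=5) error does.

```python
# Minimal F-Principle demonstration (illustrative sketch, not the companion
# repository's code): fit a two-frequency 1D target with a two-layer tanh
# network and track the relative error of each target frequency via the FFT
# of the network output.
import numpy as np

rng = np.random.default_rng(0)
n, width, lr, steps = 256, 200, 0.01, 20000
x = np.linspace(-np.pi, np.pi, n, endpoint=False).reshape(-1, 1)
y = np.sin(x) + 0.5 * np.sin(5 * x)      # low (k=1) and high (k=5) components

W1 = rng.normal(0.0, 1.0, (1, width)); b1 = np.zeros(width)
W2 = rng.normal(0.0, 1.0 / np.sqrt(width), (width, 1)); b2 = np.zeros(1)

def rel_err(pred, k):
    """Delta(k, t): relative error between FFTs of prediction and target at bin k."""
    fp, fy = np.fft.rfft(pred[:, 0]), np.fft.rfft(y[:, 0])
    return abs(fp[k] - fy[k]) / abs(fy[k])

for t in range(steps + 1):
    h = np.tanh(x @ W1 + b1)             # hidden layer
    pred = h @ W2 + b2                   # network output
    if t % 4000 == 0:
        print(f"step {t:6d}   Delta(1,t) = {rel_err(pred, 1):.3f}"
              f"   Delta(5,t) = {rel_err(pred, 5):.3f}")
    # full-batch gradient descent on the mean-squared loss, gradients by hand
    g = 2.0 * (pred - y) / n             # dLoss/dpred
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = (g @ W2.T) * (1.0 - h ** 2)     # backprop through tanh
    gW1, gb1 = x.T @ gh, gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
```

A numpy-only network with hand-written gradients is used so the sketch runs with no deep-learning dependencies; any standard framework would show the same qualitative ordering of the frequency errors.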