Analysis and Comparison of Two-Level KFAC Methods for Training Deep Neural Networks

From MaRDI portal
Publication: 6431609

arXiv: 2303.18083
MaRDI QID: Q6431609

Author name not available

Publication date: 31 March 2023

Abstract: As a second-order method, Natural Gradient Descent (NGD) can accelerate the training of neural networks. However, because computing and inverting the Fisher Information Matrix (FIM) is prohibitively expensive in both computation and memory, efficient approximations are necessary to make NGD scalable to Deep Neural Networks (DNNs). Many such approximations have been proposed. The most sophisticated of these is KFAC, which approximates the FIM as a block-diagonal matrix, where each block corresponds to a layer of the neural network. By doing so, KFAC ignores the interactions between different layers. In this work, we investigate whether restoring some low-frequency interactions between the layers by means of two-level methods is worthwhile. Inspired by domain decomposition, several two-level corrections to KFAC using different coarse spaces are proposed and assessed. The obtained results show that incorporating the layer interactions in this fashion does not significantly improve the performance of KFAC. This suggests that it is safe to discard the off-diagonal blocks of the FIM, since the block-diagonal approach is sufficiently robust, accurate, and economical in computation time.
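To make the block-diagonal idea in the abstract concrete, the following NumPy sketch illustrates how KFAC preconditions a single layer's gradient with Kronecker factors A (second moment of layer inputs) and G (second moment of back-propagated gradients). This is only an illustration of the general KFAC scheme, not code from the linked repository; all names, shapes, and the damping value are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, batch = 4, 3, 32          # illustrative layer and batch sizes

# Per-layer Kronecker factors estimated from a mini-batch:
#   A = E[a a^T]  (layer inputs),  G = E[g g^T]  (pre-activation gradients)
a = rng.normal(size=(batch, n_in))      # layer inputs
g = rng.normal(size=(batch, n_out))     # back-propagated gradients
A = a.T @ a / batch
G = g.T @ g / batch

damping = 1e-2                          # Tikhonov damping (assumed value)
A_inv = np.linalg.inv(A + damping * np.eye(n_in))
G_inv = np.linalg.inv(G + damping * np.eye(n_out))

grad_W = rng.normal(size=(n_out, n_in)) # Euclidean gradient of the loss w.r.t. W

# KFAC approximates this layer's Fisher block as A (x) G, so the
# preconditioned (approximate natural-gradient) step satisfies
#   vec(delta_W) ~= (A (x) G)^{-1} vec(grad_W)  <=>  delta_W = G^{-1} grad_W A^{-1}
delta_W = G_inv @ grad_W @ A_inv

# Because the FIM is treated as block-diagonal over layers, this update is
# computed independently for each layer; the two-level corrections studied in
# the paper add a coarse-space solve on top to reintroduce some cross-layer
# coupling, in the spirit of two-level domain decomposition methods.
print(delta_W.shape)                    # (n_out, n_in)
```

In this picture, the paper's question is whether the extra coarse-space correction (coupling the per-layer blocks) pays off; the reported answer is that the per-layer block-diagonal preconditioning above is already sufficient.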




Has companion code repository: https://github.com/Abdoulaye-Koroko/natural-gradients







