ADMM for Efficient Deep Learning with Global Convergence
Publication: 6319737
arXiv: 1905.13611
MaRDI QID: Q6319737
Author name not available
Publication date: 31 May 2019
Abstract: The Alternating Direction Method of Multipliers (ADMM) has been used successfully in many conventional machine learning applications and is considered a useful alternative to Stochastic Gradient Descent (SGD) as a deep learning optimizer. However, as this is an emerging field, several challenges remain: 1) a lack of global convergence guarantees, 2) slow convergence towards solutions, and 3) cubic time complexity with regard to feature dimensions. In this paper, we propose a novel optimization framework for deep learning via ADMM (dlADMM) that addresses these challenges simultaneously. The parameters in each layer are updated backward and then forward so that parameter information is exchanged efficiently across layers. The time complexity is reduced from cubic to quadratic in the (latent) feature dimensions via a dedicated algorithm design that solves the subproblems with iterative quadratic approximations and backtracking. Finally, we provide the first proof of global convergence for an ADMM-based method (dlADMM) on a deep neural network problem under mild conditions. Experiments on benchmark datasets demonstrate that our proposed dlADMM algorithm outperforms most of the comparison methods.
Has companion code repository: https://github.com/xianggebenben/dlADMM
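The abstract describes the algorithm at a high level: a backward and then forward sweep over the layers, with each layer subproblem handled by an iterative quadratic approximation plus backtracking. The Python sketch below is a hypothetical, simplified illustration of that structure under those assumptions, not the authors' implementation; the names quadratic_backtracking_step, dladmm_sweep, subproblem_obj, and subproblem_grad are invented for this example. Refer to the companion repository above for the actual dlADMM code.

import numpy as np

def quadratic_backtracking_step(grad_fn, obj_fn, w, theta0=1.0, eta=2.0, max_tries=50):
    # Minimize a quadratic approximation of the subproblem objective around w:
    #   q(d) = f(w) + <g, d> + (theta/2)||d||^2,  minimized by d = -g/theta.
    # theta is increased (backtracking) until the trial point satisfies
    # obj_fn(w + d) <= q(d), so the subproblem objective never increases.
    g = grad_fn(w)
    f = obj_fn(w)
    theta = theta0
    for _ in range(max_tries):
        w_new = w - g / theta
        d = w_new - w
        quad = f + float(g.ravel() @ d.ravel()) + 0.5 * theta * float(np.sum(d * d))
        if obj_fn(w_new) <= quad:
            return w_new
        theta *= eta  # backtrack: tighten the quadratic upper bound
    return w  # no acceptable step found; keep the previous iterate

def dladmm_sweep(weights, subproblem_obj, subproblem_grad):
    # One dlADMM-style iteration: update the per-layer parameters backward
    # (last layer to first), then forward (first layer to last), so that
    # information is exchanged across layers within a single iteration.
    # subproblem_obj(l, w) / subproblem_grad(l, w) stand in for the layer-l
    # augmented-Lagrangian subproblem of the real method.
    L = len(weights)
    order = list(reversed(range(L))) + list(range(L))  # backward, then forward
    for l in order:
        weights[l] = quadratic_backtracking_step(
            lambda w, l=l: subproblem_grad(l, w),
            lambda w, l=l: subproblem_obj(l, w),
            weights[l])
    return weights

if __name__ == "__main__":
    # Toy check: decoupled quadratic subproblems min_w 0.5*||w - t_l||^2,
    # whose minimizers are the targets t_l themselves.
    targets = [np.full((3, 3), l + 1.0) for l in range(3)]
    ws = [np.zeros((3, 3)) for _ in range(3)]
    for _ in range(5):
        ws = dladmm_sweep(ws,
                          lambda l, w: 0.5 * float(np.sum((w - targets[l]) ** 2)),
                          lambda l, w: w - targets[l])
    print([float(np.max(np.abs(w - t))) for w, t in zip(ws, targets)])  # ~0.0 each

The toy check applies the sweep to decoupled quadratic subproblems, for which a single accepted quadratic step per layer already reaches each minimizer; in the paper's setting the subproblems would instead come from the augmented Lagrangian of the deep network training problem.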