Refining the Structure of Neural Networks Using Matrix Conditioning
From MaRDI portal
Publication: 6323289
arXiv: 1908.02400 · MaRDI QID: Q6323289
Author name not available
Publication date: 6 August 2019
Abstract: Deep learning models have proven to be exceptionally useful in performing many machine learning tasks. However, for each new dataset, choosing an effective size and structure of the model can be a time-consuming process of trial and error. While a small network with few neurons might not be able to capture the intricacies of a given task, having too many neurons can lead to overfitting and poor generalization. Here, we propose a practical method that employs matrix conditioning to automatically design the structure of layers of a feed-forward network, by first adjusting the proportion of neurons among the layers of a network and then scaling the size of network up or down. Results on sample image and non-image datasets demonstrate that our method results in small networks with high accuracies. Finally, guided by matrix conditioning, we provide a method to effectively squeeze models that are already trained. Our techniques reduce the human cost of designing deep learning models and can also reduce training time and the expense of using neural networks for applications.
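The abstract describes sizing the layers of a feed-forward network by examining the conditioning of its weight matrices. As a minimal illustration of the underlying quantity, the sketch below computes the 2-norm condition number of each layer's weight matrix with NumPy; the layer sizes, random weights, and the "flag the worst-conditioned layer" heuristic are all assumptions for illustration, not the paper's actual criterion.

```python
import numpy as np

def layer_condition_numbers(weights):
    """Return the 2-norm condition number (sigma_max / sigma_min)
    of each layer's weight matrix."""
    return [np.linalg.cond(W) for W in weights]

rng = np.random.default_rng(0)
# Hypothetical 3-layer feed-forward net with layer widths 16 -> 32 -> 8 -> 4.
weights = [
    rng.standard_normal((16, 32)),
    rng.standard_normal((32, 8)),
    rng.standard_normal((8, 4)),
]

conds = layer_condition_numbers(weights)
# A layer whose weight matrix is far worse conditioned than its peers
# could be a candidate for resizing (illustrative heuristic only).
worst_layer = int(np.argmax(conds))
```

`np.linalg.cond` handles rectangular matrices via the SVD, so it applies directly to feed-forward weight matrices of any shape.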