Consistent Feature Selection for Analytic Deep Neural Networks
From MaRDI portal
Publication: 6351455
arXiv: 2010.08097
MaRDI QID: Q6351455
Author name not available
Publication date: 15 October 2020
Abstract: One of the most important steps toward interpretability and explainability of neural network models is feature selection, which aims to identify the subset of relevant features. Theoretical results in the field have mostly focused on the prediction aspect of the problem with virtually no work on feature selection consistency for deep neural networks due to the model's severe nonlinearity and unidentifiability. This lack of theoretical foundation casts doubt on the applicability of deep learning to contexts where correct interpretations of the features play a central role. In this work, we investigate the problem of feature selection for analytic deep networks. We prove that for a wide class of networks, including deep feed-forward neural networks, convolutional neural networks, and a major sub-class of residual neural networks, the Adaptive Group Lasso selection procedure with Group Lasso as the base estimator is selection-consistent. The work provides further evidence that Group Lasso might be inefficient for feature selection with neural networks and advocates the use of Adaptive Group Lasso over the popular Group Lasso.
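The two-stage procedure described in the abstract — fit a Group Lasso first, then refit with per-group penalties inversely proportional to the first-stage group norms — can be illustrated on a much simpler setting than the paper's. The sketch below applies it to a multi-response linear model, where each input feature's row of coefficients forms one group (a stand-in for the group of first-layer weights attached to one input of a network). All function names and the proximal-gradient solver are assumptions for illustration, not the paper's implementation; see the companion repository for the authors' code.

```python
import numpy as np

def fit_group_lasso(X, Y, lam, group_weights=None, n_iter=3000):
    """Proximal gradient (ISTA) for multi-response least squares with a
    row-wise Group Lasso penalty: each row of W (one input feature's
    coefficients) is kept or zeroed jointly. Illustrative sketch only."""
    n, d = X.shape
    if group_weights is None:
        group_weights = np.ones(d)
    W = np.zeros((d, Y.shape[1]))
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)  # 1 / Lipschitz constant
    for _ in range(n_iter):
        W -= step * (X.T @ (X @ W - Y) / n)       # gradient step
        norms = np.linalg.norm(W, axis=1)
        shrink = np.maximum(0.0, 1.0 - step * lam * group_weights
                            / np.maximum(norms, 1e-12))
        W *= shrink[:, None]                      # group soft-thresholding
    return W

# Synthetic data: only features 0 and 1 are relevant.
rng = np.random.default_rng(0)
n, d = 200, 6
X = rng.standard_normal((n, d))
W_true = np.zeros((d, 3))
W_true[0] = [2.0, -1.0, 1.5]
W_true[1] = [1.0, 1.0, -2.0]
Y = X @ W_true + 0.1 * rng.standard_normal((n, 3))

# Stage 1: plain Group Lasso as the base estimator.
W1 = fit_group_lasso(X, Y, lam=0.1)
# Stage 2: Adaptive Group Lasso, penalizing each group by 1/||stage-1 group||,
# so groups the base estimator shrank toward zero are penalized heavily.
adaptive = 1.0 / np.maximum(np.linalg.norm(W1, axis=1), 1e-6)
W2 = fit_group_lasso(X, Y, lam=0.1, group_weights=adaptive)

selected = np.linalg.norm(W2, axis=1) > 1e-8
print(selected)
```

The adaptive reweighting is what makes the second stage selection-consistent in classical settings: groups with small first-stage norms receive a penalty large enough to force them exactly to zero, while strongly relevant groups are penalized only lightly.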
Has companion code repository: https://github.com/vucdinh/alg-net