Margin Maximization as Lossless Maximal Compression
Publication: 6333634
arXiv: 2001.10318
MaRDI QID: Q6333634
Gavin Brown, Henry Reeve, Nikolaos Nikolaou
Publication date: 28 January 2020
Abstract: The ultimate goal of a supervised learning algorithm is to produce models constructed on the training data that generalize well to new examples. In classification, functional margin maximization -- correctly classifying as many training examples as possible with maximal confidence -- is known to construct models with good generalization guarantees. This work gives an information-theoretic interpretation of a margin-maximizing model on a noiseless training dataset as one that achieves lossless maximal compression of said dataset -- i.e., one that extracts from the features all the information useful for predicting the label and no more. The connection offers new insights into generalization in supervised machine learning, shows margin maximization to be a special case (that of classification) of a more general principle, and explains the success and potential limitations of popular learning algorithms such as gradient boosting. We support our observations with theoretical arguments and empirical evidence and identify interesting directions for future work.
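The phrase "all the useful information for predicting the label and no more" echoes the standard information-theoretic notion of a minimal sufficient representation. As a sketch in our own notation (not necessarily the paper's exact formalism): a representation Z = f(X) of the features X is lossless with respect to the label Y when it preserves all predictive information, and maximal when it retains nothing beyond that:

% Sketch in our notation; I(.;.) denotes mutual information.
\begin{align*}
  \text{lossless:} \quad & I(Z;Y) = I(X;Y)
    && \text{(keeps all label-relevant information)} \\
  \text{maximal:}  \quad & Z \in \arg\min_{Z' = g(X)\,:\, I(Z';Y) = I(X;Y)} I(Z';X)
    && \text{(and nothing more)}
\end{align*}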
Has companion code repository: https://github.com/nnikolaou/margin_maximization_LMC
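Since the abstract singles out gradient boosting, a quick way to inspect functional margins empirically is to train a boosted classifier and examine y_i * f(x_i), which is positive exactly when example i is correctly classified and grows with the model's confidence. A minimal sketch using scikit-learn follows; the dataset and model settings are illustrative and not taken from the paper or the companion repository:

# Minimal sketch: functional margins y_i * f(x_i) of a gradient
# boosted classifier; all settings below are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
y_pm = 2 * y - 1  # relabel {0, 1} -> {-1, +1} so margins are y * f(x)

clf = GradientBoostingClassifier(n_estimators=200, random_state=0).fit(X, y)
f_x = clf.decision_function(X)   # real-valued score f(x) per training example
margins = y_pm * f_x             # > 0 iff the example is classified correctly

print(f"min margin: {margins.min():.3f}")
print(f"fraction correctly classified: {(margins > 0).mean():.3f}")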