Scale-invariant representation of machine learning
Publication: 6377021
arXiv: 2109.02914
MaRDI QID: Q6377021
Author name not available
Publication date: 7 September 2021
Abstract: The success of machine learning has resulted from its structured representation of data. Similar data have close internal representations, appearing as compressed codes in classification or as emergent labels in clustering. We observe that the frequency of internal codes or labels follows power laws in both supervised and unsupervised learning models. This scale-invariant distribution implies that machine learning largely compresses frequent, typical data while simultaneously differentiating many atypical data as outliers. In this study, we derive the process by which these power laws can naturally arise in machine learning. In terms of information theory, the scale-invariant representation corresponds to a maximally uncertain data grouping among the possible representations that guarantee a given learning accuracy.
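One standard way such a power law can emerge is from a maximum-entropy argument; the sketch below fixes the mean log group size as a stand-in for the accuracy constraint, which is an illustrative assumption and not necessarily the constraint used in the paper.

\begin{align*}
&\max_{p}\ H[p] = -\sum_{k \ge 1} p_k \ln p_k
\quad \text{subject to} \quad \sum_{k} p_k = 1, \qquad \sum_{k} p_k \ln k = c,\\
&\mathcal{L} = H[p] - \lambda_0\Big(\sum_k p_k - 1\Big) - \lambda_1\Big(\sum_k p_k \ln k - c\Big),\\
&\frac{\partial \mathcal{L}}{\partial p_k} = -\ln p_k - 1 - \lambda_0 - \lambda_1 \ln k = 0
\ \ \Longrightarrow\ \ p_k \propto k^{-\lambda_1}.
\end{align*}

Here \(k\) indexes the rank of a code or label, so the maximizing distribution is a rank-frequency power law with exponent set by the Lagrange multiplier of the constraint.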
Has companion code repository: https://github.com/sungyeop/powerlaw_ml
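A minimal, illustrative way to look for the reported rank-frequency power law is to cluster a dataset and fit the slope of the log-log rank-frequency curve of the cluster labels. The sketch below is not taken from the companion repository; the dataset (scikit-learn digits), the clustering model (KMeans with 64 clusters), and the least-squares fit are all assumptions made for illustration.

import numpy as np
from collections import Counter
from sklearn.datasets import load_digits
from sklearn.cluster import KMeans

# Unsupervised "labels": cluster assignments play the role of internal codes.
X, _ = load_digits(return_X_y=True)
labels = KMeans(n_clusters=64, n_init=10, random_state=0).fit_predict(X)

# Rank-frequency curve: label counts sorted in decreasing order.
counts = np.array(sorted(Counter(labels).values(), reverse=True), dtype=float)
ranks = np.arange(1, len(counts) + 1)

# Crude power-law check: slope of log(count) against log(rank).
slope, _ = np.polyfit(np.log(ranks), np.log(counts), 1)
print(f"estimated rank-frequency exponent: {slope:.2f}")

A roughly straight log-log curve (constant slope) would be consistent with the scale-invariant distribution described in the abstract; a careful analysis would use a dedicated power-law estimator rather than a least-squares fit.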