A Measure of the Complexity of Neural Representations based on Partial Information Decomposition
Publication: Q6411494
arXiv: 2209.10438 · MaRDI QID: Q6411494
Author name not available
Publication date: 21 September 2022
Abstract: In neural networks, task-relevant information is represented jointly by groups of neurons. However, the specific way in which this mutual information about the classification label is distributed among the individual neurons is not well understood: While parts of it may only be obtainable from specific single neurons, other parts are carried redundantly or synergistically by multiple neurons. We show how Partial Information Decomposition (PID), a recent extension of information theory, can disentangle these different contributions. From this, we introduce the measure of "Representational Complexity", which quantifies the difficulty of accessing information spread across multiple neurons. We show how this complexity is directly computable for smaller layers. For larger layers, we propose subsampling and coarse-graining procedures and prove corresponding bounds on the latter. Empirically, for quantized deep neural networks solving the MNIST and CIFAR10 tasks, we observe that representational complexity decreases both through successive hidden layers and over training, and compare the results to related measures. Overall, we propose representational complexity as a principled and interpretable summary statistic for analyzing the structure and evolution of neural representations and complex systems in general.
Has companion code repository: https://github.com/priesemann-group/nninfo
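To illustrate the kind of decomposition the abstract describes, here is a minimal, self-contained sketch on the classic XOR toy system, where two "neurons" individually carry no information about the label but jointly determine it. The redundancy measure used below (the minimum of the single-source informations, an MMI-style choice) and all function names are illustrative assumptions for this sketch; they are not taken from the paper or its `nninfo` repository.

```python
from collections import Counter
from itertools import product
from math import log2

def mutual_information(pairs):
    """Plug-in estimate of I(A;B) in bits from a list of (a, b) samples."""
    n = len(pairs)
    p_ab = Counter(pairs)
    p_a = Counter(a for a, _ in pairs)
    p_b = Counter(b for _, b in pairs)
    return sum(
        (c / n) * log2((c / n) / ((p_a[a] / n) * (p_b[b] / n)))
        for (a, b), c in p_ab.items()
    )

# XOR toy system: two "neurons" X1, X2 and a label Y = X1 ^ X2,
# with all four input patterns equally likely.
samples = [(x1, x2, x1 ^ x2) for x1, x2 in product([0, 1], repeat=2)]

i1 = mutual_information([(x1, y) for x1, _, y in samples])          # I(Y; X1)
i2 = mutual_information([(x2, y) for _, x2, y in samples])          # I(Y; X2)
i12 = mutual_information([((x1, x2), y) for x1, x2, y in samples])  # I(Y; X1,X2)

# PID bookkeeping with an MMI-style redundancy (an assumption of this
# sketch, not the redundancy measure used in the paper):
redundancy = min(i1, i2)
unique1 = i1 - redundancy
unique2 = i2 - redundancy
synergy = i12 - unique1 - unique2 - redundancy

print(i1, i2, i12, synergy)  # 0.0 0.0 1.0 1.0
```

Here all of the one bit of label information is synergistic: it is accessible only by reading both neurons together, which is exactly the kind of hard-to-access contribution that a representational-complexity measure is meant to penalize.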