DiME: Maximizing Mutual Information by a Difference of Matrix-Based Entropies

arXiv: 2301.08164
MaRDI QID: Q6423845

Author name not available

Publication date: 19 January 2023

Abstract: We introduce an information-theoretic quantity with properties similar to mutual information that can be estimated from data without making explicit assumptions on the underlying distribution. This quantity is based on a recently proposed matrix-based entropy that uses the eigenvalues of a normalized Gram matrix to estimate the eigenvalues of an uncentered covariance operator in a reproducing kernel Hilbert space. We show that a difference of matrix-based entropies (DiME) is well suited to problems involving the maximization of mutual information between random variables. While many methods for such tasks can lead to trivial solutions, DiME naturally penalizes these outcomes. We compare DiME to several baseline estimators of mutual information on a toy Gaussian dataset. We illustrate use cases for DiME, such as latent factor disentanglement and a multiview representation learning problem in which DiME learns a shared representation among views with high mutual information.
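The construction described in the abstract can be sketched concretely. Below is a minimal Python sketch of a matrix-based entropy estimator and a DiME-style difference, assuming an RBF kernel, a Rényi order α slightly above 1, and a permutation average to approximate the joint entropy under independent pairings; the function names, signatures, and these specific choices are illustrative assumptions rather than the authors' implementation (see the companion repository linked below for the actual code).

```python
# Minimal sketch of matrix-based entropy and a DiME-style difference.
# Assumptions (not from the source): RBF kernel, Renyi order alpha ~ 1,
# and a permutation average to emulate independent pairings.
import numpy as np

def gram_matrix(x, sigma=1.0):
    """Trace-normalized RBF Gram matrix, so its eigenvalues sum to 1."""
    sq = np.sum(x ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * x @ x.T
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    return K / np.trace(K)

def matrix_entropy(A, alpha=1.01):
    """Matrix-based Renyi entropy: log2(sum(lambda_i ** alpha)) / (1 - alpha)."""
    lam = np.clip(np.linalg.eigvalsh(A), 0.0, None)  # guard tiny negative eigenvalues
    return np.log2(np.sum(lam ** alpha)) / (1.0 - alpha)

def joint_gram(A, B):
    """Joint Gram matrix: Hadamard product, re-normalized to unit trace."""
    C = A * B
    return C / np.trace(C)

def dime(x, y, alpha=1.01, n_perms=20, sigma=1.0, seed=None):
    """DiME-style score: mean joint entropy over shuffled pairings of y
    minus the joint entropy of the true (x_i, y_i) pairing."""
    rng = np.random.default_rng(seed)
    A, B = gram_matrix(x, sigma), gram_matrix(y, sigma)
    true_joint = matrix_entropy(joint_gram(A, B), alpha)
    shuffled = []
    for _ in range(n_perms):
        p = rng.permutation(len(y))
        shuffled.append(matrix_entropy(joint_gram(A, B[np.ix_(p, p)]), alpha))
    return float(np.mean(shuffled)) - true_joint
```

As a rough sanity check under these assumptions, dime(x, y) on independently drawn samples should hover near zero, while strongly dependent pairs should yield a larger value, reflecting how the difference of entropies penalizes trivial (independent) solutions.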

Has companion code repository: https://github.com/uk-cliplab/DiME
