Understanding the Limitations of Variational Mutual Information Estimators


arXiv: 1910.06222
MaRDI QID: Q6327164

Author name not available

Publication date: 14 October 2019

Abstract: Variational approaches based on neural networks show promise for estimating mutual information (MI) between high-dimensional variables. However, they can be difficult to use in practice due to poorly understood bias-variance trade-offs. We show theoretically that, under some conditions, estimators such as MINE exhibit variance that can grow exponentially with the true amount of underlying MI. We also demonstrate empirically that existing estimators fail to satisfy basic self-consistency properties of MI, such as data processing and additivity under independence. Based on a unified perspective of variational approaches, we develop a new estimator that focuses on variance reduction. Empirical results demonstrate that our proposed estimator exhibits improved bias-variance trade-offs on standard benchmark tasks.
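To make the family of estimators discussed in the abstract concrete, below is a minimal sketch of the standard Donsker-Varadhan lower bound on MI, which MINE-style estimators optimize with a neural-network critic. This is illustrative only and is not taken from the paper or its companion repository; the function and variable names (dv_bound, t_joint, t_marginal) and the hand-picked quadratic critic are assumptions for the example.

```python
import numpy as np

def dv_bound(t_joint, t_marginal):
    """Donsker-Varadhan lower bound on MI,
    I(X;Y) >= E_{p(x,y)}[T(x,y)] - log E_{p(x)p(y)}[exp(T(x,y))],
    estimated from critic scores on paired (joint) and shuffled (product-of-marginals) samples."""
    t_joint = np.asarray(t_joint, dtype=float)
    t_marginal = np.asarray(t_marginal, dtype=float)
    # log-mean-exp of the marginal scores, computed stably
    m = t_marginal.max()
    log_mean_exp = m + np.log(np.mean(np.exp(t_marginal - m)))
    return float(np.mean(t_joint) - log_mean_exp)

# Toy usage on correlated Gaussians with a simple fixed critic (not a trained network)
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)
y = 0.9 * x + np.sqrt(1 - 0.9**2) * rng.standard_normal(10_000)
critic = lambda a, b: a * b
estimate = dv_bound(critic(x, y), critic(x, rng.permutation(y)))
print(f"DV lower-bound estimate: {estimate:.3f}")
```

In MINE-style estimators the critic is a trained neural network rather than a fixed function, and the second term (the log of an exponential moment under the product of marginals) is the quantity whose sample estimate becomes increasingly noisy as the true MI grows, which is the variance behaviour the abstract analyzes.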




Companion code repository: https://github.com/ermongroup/smile-mi-estimator








