CCMI : Classifier based Conditional Mutual Information Estimation


arXiv: 1906.01824
MaRDI QID: Q6320000

Author name not available

Publication date: 5 June 2019

Abstract: Conditional Mutual Information (CMI) is a measure of conditional dependence between random variables X and Y, given another random variable Z. It can be used to quantify conditional dependence among variables in many data-driven inference problems such as graphical models, causal learning, feature selection and time-series analysis. While k-nearest neighbor (kNN) based estimators as well as kernel-based methods have been widely used for CMI estimation, they suffer severely from the curse of dimensionality. In this paper, we leverage advances in classifiers and generative models to design methods for CMI estimation. Specifically, we introduce an estimator for KL-divergence based on the likelihood ratio, obtained by training a classifier to distinguish the observed joint distribution from the product distribution. We then show how to construct several CMI estimators from this basic divergence estimator by drawing on ideas from conditional generative models. We demonstrate that the estimates from our proposed approaches do not degrade in performance with increasing dimension and obtain significant improvements over the widely used KSG estimator. Finally, as an application of accurate CMI estimation, we use our best estimator for conditional independence testing, where it outperforms the state-of-the-art tester on both simulated and real datasets.
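
The core construction in the abstract, estimating a KL divergence from a classifier's likelihood ratio and then assembling a CMI estimator out of it, can be sketched in a few lines. The Python sketch below is illustrative only, not the authors' implementation (see the linked companion repository for that): it assumes NumPy and scikit-learn, the network size and helper names are arbitrary choices, and the product distribution p(x)p(y) is approximated by permuting samples.

import numpy as np
from sklearn.neural_network import MLPClassifier

def classifier_kl(p_samples, q_samples, eps=1e-8):
    # Estimate D_KL(P || Q) from samples of each distribution.
    # A classifier is trained to distinguish P (label 1) from Q (label 0);
    # its predicted odds gamma / (1 - gamma) approximate the likelihood
    # ratio p/q, which is plugged into the Donsker-Varadhan form:
    #     D_KL(P || Q) >= E_P[log r] - log E_Q[r].
    X = np.vstack([p_samples, q_samples])
    y = np.concatenate([np.ones(len(p_samples)), np.zeros(len(q_samples))])
    clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500)
    clf.fit(X, y)
    # For a sketch we evaluate on the training samples; in practice one
    # would use held-out data to avoid an optimistic ratio estimate.
    gamma_p = np.clip(clf.predict_proba(p_samples)[:, 1], eps, 1 - eps)
    gamma_q = np.clip(clf.predict_proba(q_samples)[:, 1], eps, 1 - eps)
    r_p = gamma_p / (1 - gamma_p)  # likelihood ratio on P samples
    r_q = gamma_q / (1 - gamma_q)  # likelihood ratio on Q samples
    return np.mean(np.log(r_p)) - np.log(np.mean(r_q))

def mi_estimate(x, y, rng):
    # I(X;Y) = D_KL(p(x,y) || p(x)p(y)); the product distribution is
    # approximated by permuting the y samples.
    joint = np.hstack([x, y])
    prod = np.hstack([x, y[rng.permutation(len(y))]])
    return classifier_kl(joint, prod)

def cmi_estimate(x, y, z, rng):
    # One construction mentioned in the abstract: the difference form
    # I(X;Y|Z) = I(X;(Y,Z)) - I(X;Z).
    return mi_estimate(x, np.hstack([y, z]), rng) - mi_estimate(x, z, rng)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 2000
    z = rng.normal(size=(n, 1))
    x = z + 0.5 * rng.normal(size=(n, 1))
    y = z + 0.5 * rng.normal(size=(n, 1))  # X and Y independent given Z
    print(cmi_estimate(x, y, z, rng))      # expected to be close to 0

The difference form I(X;Y|Z) = I(X;(Y,Z)) - I(X;Z) used above follows from the chain rule of mutual information; the conditional-generative-model variants mentioned in the abstract instead construct samples that mimic p(x|z)p(y|z)p(z) directly.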




Has companion code repository: https://github.com/sudiptodip15/ccmi







