A Finite-Sample, Distribution-Free, Probabilistic Lower Bound on Mutual Information
Publication: 5198608
DOI: 10.1162/NECO_a_00144 · zbMath: 1248.94013 · Wikidata: Q51579008 · Scholia: Q51579008 · MaRDI QID: Q5198608
Arunava Banerjee, Nathan D. Vanderkraats
Publication date: 9 August 2011
Published in: Neural Computation
Full work available at URL: http://www.mitpressjournals.org/doi/abs/10.1162/NECO_a_00144
Mathematics Subject Classification:
- Measures of information, entropy (94A17)
- Channel models (including quantum) in information and communication theory (94A40)
- Communication theory (94A05)
Cites Work
- The tight constant in the Dvoretzky-Kiefer-Wolfowitz inequality
- Encoding stimulus information by spike numbers and mean response time in primary auditory cortex
- Estimating Entropy Rates with Bayesian Confidence Intervals
- Asymptotic Minimax Character of the Sample Distribution Function and of the Classical Multinomial Estimator
- A Probabilistic Upper Bound on Differential Entropy
- Information Loss in an Optimal Maximum Likelihood Decoding
- Spiking Neuron Models
- Estimation of Entropy and Mutual Information
- Anthropic Correction of Information Estimates and Its Application to Neural Coding
- Probability Inequalities for Sums of Bounded Random Variables
- Tight Data-Robust Bounds to Mutual Information Combining Shuffling and Model Selection Techniques
- Estimating Information Rates with Confidence Intervals in Neural Spike Trains
- Elements of Information Theory
- Heuristic Approach to the Kolmogorov-Smirnov Theorems
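Several of the cited works (Massart's tight constant in the Dvoretzky-Kiefer-Wolfowitz inequality, the asymptotic minimax result of Dvoretzky, Kiefer, and Wolfowitz, Hoeffding's inequality) supply the finite-sample, distribution-free machinery on which probabilistic bounds of this kind are typically built. As an illustration of that machinery only, and not a reproduction of the paper's own construction, the sketch below computes the DKW-Massart confidence band for an empirical CDF: with probability at least 1 − α, sup_x |F_n(x) − F(x)| ≤ sqrt(ln(2/α) / (2n)). Function and variable names are illustrative choices, not from the paper.

```python
import numpy as np

def dkw_epsilon(n: int, alpha: float) -> float:
    """Half-width of the DKW-Massart confidence band.

    With probability at least 1 - alpha, the empirical CDF of n i.i.d.
    samples stays within +/- epsilon of the true CDF, uniformly in x
    (Massart's tight constant: P(sup |F_n - F| > eps) <= 2 exp(-2 n eps^2)).
    """
    return float(np.sqrt(np.log(2.0 / alpha) / (2.0 * n)))

# Illustrative use: a distribution-free 95% band around an empirical CDF.
rng = np.random.default_rng(0)
samples = rng.exponential(scale=1.0, size=500)   # any distribution works; exponential is arbitrary here
xs = np.sort(samples)
ecdf = np.arange(1, len(xs) + 1) / len(xs)
eps = dkw_epsilon(len(xs), alpha=0.05)
lower = np.clip(ecdf - eps, 0.0, 1.0)
upper = np.clip(ecdf + eps, 0.0, 1.0)
print(f"n = {len(xs)}, 95% DKW band half-width = {eps:.4f}")
```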