Increasing and decreasing returns and losses in mutual information feature subset selection
DOI: 10.3390/e12102144 · zbMath: 1229.62005 · OpenAlex: W1983091269 · MaRDI QID: Q657494
Marc M. Van Hulle, Gert Van Dijck
Publication date: 9 January 2012
Published in: Entropy
Full work available at URL: https://doi.org/10.3390/e12102144
Keywords: Bayesian networks; conditional entropy; conditional mutual information; feature subset selection; bit parity; decreasing losses; increasing losses
MSC: Bayesian inference (62F15); Theory of languages and software systems (knowledge-based systems, expert systems, etc.) for artificial intelligence (68T35); Statistical aspects of information-theoretic topics (62B10)
Related Items (2)
Cites Work
- Computer-Automated Design of Multifont Print Recognition Logic
- Relations between entropy and error probability
- Efficient selection of discriminative genes from microarray gene expression data for cancer diagnosis
- 10.1162/153244303322753616
- A Programmed Algorithm for Designing Multifont Character Recognition Logics
- Computational Methods of Feature Selection
- Probability of error, equivocation, and the Chernoff bound