10.1162/153244303322753742
From MaRDI portal
Publication:4827825
DOI: 10.1162/153244303322753742
zbMath: 1102.68638
OpenAlex: W4251171122
MaRDI QID: Q4827825
Publication date: 23 November 2004
Published in: CrossRef Listing of Deleted DOIs
Full work available at URL: https://doi.org/10.1162/153244303322753742
Mathematics Subject Classification:
Learning and adaptive systems in artificial intelligence (68T05)
Pattern recognition, speech recognition (68T10)
Measures of information, entropy (94A17)
Related Items (38)
Probabilistic Learning Vector Quantization with Cross-Entropy for Probabilistic Class Assignments in Classification Learning
Information-theoretic approaches to SVM feature selection for metagenome read classification
Information enhancement for interpreting competitive learning
Time-efficient estimation of conditional mutual information for variable selection in classification
A novel image thresholding method based on Parzen window estimate
Canonical kernel dimension reduction
Embedded variable selection method using signomial classification
Similarity interaction in information-theoretic self-organizing maps
Sufficient Dimension Reduction via Squared-Loss Mutual Information Estimation
Linear feature-weighted support vector machine
Overfitting in linear feature extraction for classification of high-dimensional image data
Exploring the role of graph spectra in graph coloring algorithm performance
A linear discriminant analysis method based on mutual information maximization
A compact local binary pattern using maximization of mutual information for face analysis
Regularized discriminant entropy analysis
Aspects in classification learning -- review of recent developments in learning vector quantization
Appropriate Data Density Models in Probabilistic Machine Learning Approaches for Data Analysis
Density-Difference Estimation
Direct Estimation of the Derivative of Quadratic Mutual Information with Application in Supervised Dimension Reduction
Information theoretic hierarchical clustering
Divergence-Based Vector Quantization
Approximate information discriminant analysis: A computationally simple heteroscedastic feature extraction technique
Improved learning of Riemannian metrics for exploratory analysis
Spectral feature projections that maximize Shannon mutual information with class labels
Supervised principal component analysis: visualization, classification and regression on subspaces and submanifolds
A Conditional Entropy Minimization Criterion for Dimensionality Reduction and Multiple Kernel Learning
Comparison of relevance learning vector quantization with other metric adaptive classification methods
A Regularized Correntropy Framework for Robust Pattern Recognition
Robust Independent Component Analysis Using Quadratic Negentropy
Simplified information maximization for improving generalization performance in multilayered neural networks
Selective ensemble of SVDDs with Renyi entropy based diversity measure
An efficient discriminant-based solution for small sample size problem
An estimate of mutual information that permits closed-form optimisation
Information Enhancement Learning: Local Enhanced Information to Detect the Importance of Input Variables in Competitive Learning
Constrained information maximization by free energy minimization
Information-Theoretic Representation Learning for Positive-Unlabeled Classification
A sequential ensemble clusterings generation algorithm for mixed data
Comprehensibility maximization and humanly comprehensible representations
Uses Software