On Using Principal Components Before Separating a Mixture of Two Multivariate Normal Distributions


DOI: 10.2307/2347949
zbMath: 0538.62050
OpenAlex: W109865423
MaRDI QID: Q3324854

Wei-Chien Chang

Publication date: 1983

Published in: Applied Statistics

Full work available at URL: https://doi.org/10.2307/2347949
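
The title refers to reducing data with principal components before separating a two-component multivariate normal mixture. As a point of reference only, the following is a minimal sketch of that general workflow, assuming NumPy and scikit-learn (PCA, GaussianMixture) and simulated data; the sample sizes, dimension, and mean shift are illustrative assumptions, and the sketch shows the setting rather than the paper's own analysis or conclusions.

# Minimal sketch (assumes NumPy and scikit-learn are available) of the
# workflow named in the title: project the data onto a few principal
# components, then fit a two-component Gaussian mixture in the reduced space.
# The data below are simulated for illustration only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Two 10-dimensional normal populations whose means differ in one coordinate.
n, p = 500, 10
mean_shift = np.zeros(p)
mean_shift[3] = 3.0
X = np.vstack([
    rng.multivariate_normal(np.zeros(p), np.eye(p), size=n // 2),
    rng.multivariate_normal(mean_shift, np.eye(p), size=n // 2),
])

# Keep the first two principal components, then separate the mixture there.
Z = PCA(n_components=2).fit_transform(X)
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(Z)
print(np.bincount(labels))  # sizes of the two recovered groups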




Related Items (42)

Breaking the curse of dimensionality: hierarchical Bayesian network model for multi-view clustering
Model-based clustering of high-dimensional data: a review
High-dimensional clustering via random projections
Robust variable selection for model-based learning in presence of adulteration
Dimensionality reduction on the Cartesian product of embeddings of multiple dissimilarity matrices
Component reduction in linear discriminant analysis
Sparse optimal discriminant clustering
Classification for high-throughput data with an optimal subset of principal components
Better alternatives to current methods of scaling and weighting data for cluster analysis
Addressing overfitting and underfitting in Gaussian model-based clustering
Strong Consistency of Reduced K-means Clustering
Block-Wise Variable Selection for Clustering Via Latent States of Mixture Models
Density-based silhouette diagnostics for clustering methods
Distance Metrics and Clustering Methods for Mixed‐type Data
Theoretical and practical considerations on the convergence properties of the Fisher-EM algorithm
Finite mixtures of matrix normal distributions for classifying three-way data
An algorithm for deciding the number of clusters and validation using simulated data with application to exploring crop population structure
A simulation study of the use of principal components in linear discriminant analysis
Model-Based and Nonparametric Approaches to Clustering for Data Compression in Actuarial Applications
Issues of robustness and high dimensionality in cluster analysis
Using projection-based clustering to find distance- and density-based clusters in high-dimensional data
Robust, fuzzy, and parsimonious clustering, based on mixtures of factor analyzers
Variable selection methods for model-based clustering
Sparse clustering of functional data
An extension of multiple correspondence analysis for identifying heterogeneous subgroups of respondents
Variable selection and updating in model-based discriminant analysis for high dimensional data with food authenticity applications
Order-constrained solutions in \(K\)-means clustering: even better than being globally optimal
Efficient regularized spectral data embedding
Modelling high-dimensional data by mixtures of factor analyzers
Optimal classification for time-course gene expression data using functional data analysis
Projection Pursuit Based on Gaussian Mixtures and Evolutionary Algorithms
Feature screening in large scale cluster analysis
Biplot methodology in exploratory analysis of microarray data
Reducing data dimension for cluster detection
Use of SVD-based probit transformation in clustering gene expression profiles
A Bayesian Fisher-EM algorithm for discriminative Gaussian subspace clustering
The Mixturegram: A Visualization Tool for Assessing the Number of Components in Finite Mixture Models
Clustering Categorical Data via Ensembling Dissimilarity Matrices
Nonparametric semi-supervised classification with application to signal detection in high energy physics
A simple model‐based approach to variable selection in classification and clustering
Strong consistency of factorial \(k\)-means clustering
Lowdimensional additive overlapping clustering



