On Using Principal Components Before Separating a Mixture of Two Multivariate Normal Distributions
From MaRDI portal
Publication: 3324854
DOI: 10.2307/2347949
zbMath: 0538.62050
OpenAlex: W109865423
MaRDI QID: Q3324854
Publication date: 1983
Published in: Applied Statistics
Full work available at URL: https://doi.org/10.2307/2347949
Keywords: clustering; eigenvalues; distance; scaling; principal components; mixture of two multivariate normal distributions; selection of components
Factor analysis and principal components; correspondence analysis (62H25) Classification and discrimination; cluster analysis (statistical aspects) (62H30)
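The paper's central caution is that the principal components with the largest eigenvalues need not be the ones that separate the two mixture components. A minimal numpy sketch (all data and parameter values below are illustrative assumptions, not taken from the paper) shows the effect: when within-cluster variance dominates along one axis while the group means differ along another, the first principal component tracks the noise axis and the separation shows up in a later component.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mixture of two bivariate normals: the means differ along
# axis 1, but axis 0 carries a much larger within-cluster variance.
n = 500
labels = rng.integers(0, 2, size=n)
means = np.array([[0.0, -2.0], [0.0, 2.0]])   # separation along axis 1
cov = np.diag([25.0, 1.0])                    # axis 0: large noise variance
X = means[labels] + rng.multivariate_normal([0.0, 0.0], cov, size=n)

# PCA via eigendecomposition of the sample covariance matrix.
Xc = X - X.mean(axis=0)
S = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(S)
order = np.argsort(eigvals)[::-1]             # sort eigenvalues descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

scores = Xc @ eigvecs                         # principal component scores

# Point-biserial correlation of each PC score with the mixture label
# (unknown in practice; used here only to measure separation).
corr = [abs(np.corrcoef(scores[:, j], labels)[0, 1]) for j in range(2)]
# The first PC absorbs the high-variance noise axis; the second PC
# carries nearly all of the group separation.
```

Here discarding all but the first principal component before clustering would throw away almost all the information about the two groups, which is exactly the scenario the paper warns against.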
Related Items (42)
Breaking the curse of dimensionality: hierarchical Bayesian network model for multi-view clustering ⋮ Model-based clustering of high-dimensional data: a review ⋮ High-dimensional clustering via random projections ⋮ Robust variable selection for model-based learning in presence of adulteration ⋮ Dimensionality reduction on the Cartesian product of embeddings of multiple dissimilarity matrices ⋮ Component reduction in linear discriminant analysis ⋮ Sparse optimal discriminant clustering ⋮ Classification for high-throughput data with an optimal subset of principal components ⋮ Better alternatives to current methods of scaling and weighting data for cluster analysis ⋮ Addressing overfitting and underfitting in Gaussian model-based clustering ⋮ Strong Consistency of Reduced K-means Clustering ⋮ Block-Wise Variable Selection for Clustering Via Latent States of Mixture Models ⋮ Density-based silhouette diagnostics for clustering methods ⋮ Distance Metrics and Clustering Methods for Mixed‐type Data ⋮ Theoretical and practical considerations on the convergence properties of the Fisher-EM algorithm ⋮ Finite mixtures of matrix normal distributions for classifying three-way data ⋮ An algorithm for deciding the number of clusters and validation using simulated data with application to exploring crop population structure ⋮ A simulation study of the use of principal components in linear discriminant analysis ⋮ Model-Based and Nonparametric Approaches to Clustering for Data Compression in Actuarial Applications ⋮ Issues of robustness and high dimensionality in cluster analysis ⋮ Using projection-based clustering to find distance- and density-based clusters in high-dimensional data ⋮ Robust, fuzzy, and parsimonious clustering, based on mixtures of factor analyzers ⋮ Variable selection methods for model-based clustering ⋮ Sparse clustering of functional data ⋮ An extension of multiple correspondence analysis for identifying heterogeneous subgroups of respondents ⋮ Variable selection and updating in model-based discriminant analysis for high dimensional data with food authenticity applications ⋮ Order-constrained solutions in \(K\)-means clustering: even better than being globally optimal ⋮ Efficient regularized spectral data embedding ⋮ Modelling high-dimensional data by mixtures of factor analyzers ⋮ Optimal classification for time-course gene expression data using functional data analysis ⋮ Projection Pursuit Based on Gaussian Mixtures and Evolutionary Algorithms ⋮ Feature screening in large scale cluster analysis ⋮ Biplot methodology in exploratory analysis of microarray data ⋮ Reducing data dimension for cluster detection ⋮ Use of SVD-based probit transformation in clustering gene expression profiles ⋮ A Bayesian Fisher-EM algorithm for discriminative Gaussian subspace clustering ⋮ The Mixturegram: A Visualization Tool for Assessing the Number of Components in Finite Mixture Models ⋮ Clustering Categorical Data via Ensembling Dissimilarity Matrices ⋮ Nonparametric semi-supervised classification with application to signal detection in high energy physics ⋮ A simple model‐based approach to variable selection in classification and clustering ⋮ Strong consistency of factorial \(k\)-means clustering ⋮ Lowdimensional additive overlapping clustering