A distance-based, misclassification rate adjusted classifier for multiclass, high-dimensional data
Publication: 741160
DOI: 10.1007/s10463-013-0435-8
zbMath: 1309.62108
OpenAlex: W2071988253
MaRDI QID: Q741160
Kazuyoshi Yata, Makoto Aoshima
Publication date: 10 September 2014
Published in: Annals of the Institute of Statistical Mathematics
Full work available at URL: https://doi.org/10.1007/s10463-013-0435-8
Asymptotic distribution theory in statistics (62E20)
Classification and discrimination; cluster analysis (statistical aspects) (62H30)
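The title refers to a distance-based classifier for multiclass, high-dimensional data. As a generic illustration only (a minimal sketch of plain Euclidean nearest-centroid classification, not the authors' misclassification-rate-adjusted rule), such a classifier assigns each observation to the class whose mean vector is closest:

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Compute the per-class mean vectors (centroids)."""
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def nearest_centroid_predict(X, classes, centroids):
    """Assign each row of X to the class with the closest centroid
    in Euclidean distance (a plain distance-based rule, with no
    misclassification-rate adjustment)."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

# Toy high-dimension, low-sample-size setting: p = 50 features, n = 6 samples.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(3, 50)),   # class 0
               rng.normal(2.0, 1.0, size=(3, 50))])  # class 1, shifted mean
y = np.array([0, 0, 0, 1, 1, 1])

classes, centroids = nearest_centroid_fit(X, y)
pred = nearest_centroid_predict(X, classes, centroids)
```

The paper's contribution lies in adjusting such distance-based rules so that the misclassification rate behaves well when the dimension greatly exceeds the sample size; the unadjusted rule above is only the starting point.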
Related Items
- Adaptive threshold-based classification of sparse high-dimensional data
- Support vector machine and optimal parameter selection for high-dimensional imbalanced data
- Geometric Classifier for Multiclass, High-Dimensional Data
- A classifier under the strongly spiked eigenvalue model in high-dimension, low-sample-size context
- High dimensional asymptotics for the naive Hotelling T2 statistic in pattern recognition
- A \(U\)-classifier for high-dimensional data under non-normality
- Statistical inference for high-dimension, low-sample-size data
- Robust support vector machine for high-dimensional imbalanced data
- Support vector machine and its bias correction in high-dimension, low-sample-size settings
- PCA consistency for the power spiked model in high-dimensional settings
- Asymptotic properties of hierarchical clustering in high-dimensional settings
- A test of sphericity for high-dimensional data and its application for detection of divergently spiked noise
- Asymptotic properties of the misclassification rates for Euclidean distance discriminant rule in high-dimensional data
- A survey of high dimension low sample size asymptotics
- Binary discrimination methods for high-dimensional data with a geometric representation
- Distance-based classifier by data transformation for high-dimension, strongly spiked eigenvalue models
- Bias-corrected support vector machine with Gaussian kernel in high-dimension, low-sample-size settings
- On the dimension effect of regularized linear discriminant analysis
- Kick-one-out-based variable selection method for Euclidean distance-based classifier in high-dimensional settings
- On simultaneous confidence interval estimation for the difference of paired mean vectors in high-dimensional settings
- Clustering by principal component analysis with Gaussian kernel in high-dimension, low-sample-size settings
- Geometric classifiers for high-dimensional noisy data
- Asymptotic properties of distance-weighted discrimination and its bias correction for high-dimension, low-sample-size data
- An asymptotic expansion for the distribution of Euclidean distance-based discriminant function in normal populations
- Inference on high-dimensional mean vectors under the strongly spiked eigenvalue model
Cites Work
- Distance-Weighted Discrimination
- Correlation tests for high-dimensional data using extended cross-data-matrix methodology
- Asymptotic expansion of the misclassification probabilities of D- and A-criteria for discrimination from two high dimensional populations using the theory of large dimensional random matrices
- Effective PCA for high-dimension, low-sample-size data with noise reduction via geometric representations
- Effective PCA for high-dimension, low-sample-size data with singular value decomposition of cross data matrix
- PCA consistency in high dimension, low sample size context
- Some theory for Fisher's linear discriminant function, `naive Bayes', and some alternatives when there are many more variables than observations
- Dependent central limit theorems and invariance principles
- On the distribution of the largest eigenvalue in principal components analysis
- A two-sample test for high-dimensional data with applications to gene-set testing
- Eigenvalues of large sample covariance matrices of spiked population models
- Bias-Corrected Diagonal Discriminant Rules for High-Dimensional Classification
- Two-Stage Procedures for High-Dimensional Data
- Authors' Response
- Theoretical Measures of Relative Performance of Classifiers for High Dimensional Data with Small Sample Sizes
- Scale adjustments for classifiers in high-dimensional, low sample size settings
- PCA Consistency for Non-Gaussian Data in High Dimension, Low Sample Size Context
- Comparison of Discrimination Methods for the Classification of Tumors Using Gene Expression Data
- Geometric Representation of High Dimension, Low Sample Size Data
- The high-dimension, low-sample-size geometric representation holds under mild conditions