Double data piling: a high-dimensional solution for asymptotically perfect multi-category classification
From MaRDI portal
Publication: 6643296
DOI: 10.1007/s42952-024-00263-6
MaRDI QID: Q6643296
Jeongyoun Ahn, Woonyoung Chang, Sungkyu Jung, Tae Hyun Kim
Publication date: 26 November 2024
Published in: Journal of the Korean Statistical Society
Cites Work
- Distance-Weighted Discrimination
- Boundary behavior in high dimension, low sample size asymptotics of PCA
- PCA consistency in high dimension, low sample size context
- Continuum directions for supervised dimension reduction
- Sparse HDLSS discrimination with constrained data piling
- On the distribution of the largest eigenvalue in principal components analysis
- Asymptotics of empirical eigenstructure for high dimensional spiked covariance
- Subspace rotations for high-dimensional outlier detection
- Double data piling leads to perfect classification
- Surprises in high-dimensional ridgeless least squares interpolation
- A General Framework For Consistency of Principal Component Analysis
- On Strong Mixing Conditions for Stationary Gaussian Processes
- Two Models of Double Descent for Weak Features
- Distance-based outlier detection for high dimension, low sample size data
- On the proliferation of support vectors in high dimensions
- Binary Classification of Gaussian Mixtures: Abundance of Support Vectors, Benign Overfitting, and Regularization
- Adjusting systematic bias in high dimensional principal component scores
- The Generalization Error of Random Features Regression: Precise Asymptotics and the Double Descent Curve
- Benign overfitting in linear regression
- Reconciling modern machine-learning practice and the classical bias–variance trade-off
- Weighted Distance Weighted Discrimination and Its Asymptotic Properties
- The maximal data piling direction for discrimination
- Geometric Representation of High Dimension, Low Sample Size Data
- OUP accepted manuscript
- The high-dimension, low-sample-size geometric representation holds under mild conditions
- Achieving near Perfect Classification for Functional Data
- Deep learning: a statistical viewpoint
- Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation
- Clustering high dimension, low sample size data using the maximal data piling distance
- Optimal Linear Discriminant Analysis for High-Dimensional Functional Data