Binary Classification of Gaussian Mixtures: Abundance of Support Vectors, Benign Overfitting, and Regularization
From MaRDI portal
Publication:5065474
DOI: 10.1137/21M1415121
zbMath: 1493.62402
arXiv: 2011.09148
OpenAlex: W3157298807
Wikidata: Q114074045
Scholia: Q114074045
MaRDI QID: Q5065474
No author found.
Publication date: 21 March 2022
Published in: SIAM Journal on Mathematics of Data Science
Full work available at URL: https://arxiv.org/abs/2011.09148
MSC classification:
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Pattern recognition, speech recognition (68T10)
Related Items (1)
Uses Software
Cites Work
- High-Dimensional Statistics
- High-Dimensional Probability
- Precise Error Analysis of Regularized $M$-Estimators in High Dimensions
- Deep double descent: where bigger models and more data hurt
- A random matrix analysis of random Fourier features: beyond the Gaussian kernel, a precise phase transition, and the corresponding double descent
- Two Models of Double Descent for Weak Features
- Benign overfitting in linear regression
- Reconciling modern machine-learning practice and the classical bias–variance trade-off
- A modern maximum-likelihood theory for high-dimensional logistic regression
- Convexity, Classification, and Risk Bounds