Also for \(k\)-means: more data does not imply better performance
From MaRDI portal
Publication:6134361
DOI: 10.1007/s10994-023-06361-6
zbMath: 1518.68310
OpenAlex: W4385235341
MaRDI QID: Q6134361
Manuele Bicego, Jesse H. Krijthe, Marco Loog
Publication date: 22 August 2023
Published in: Machine Learning
Full work available at URL: https://doi.org/10.1007/s10994-023-06361-6
Keywords: monotonicity; \(k\)-means algorithm; learning curve; \(k\)-means clustering; performance improvement; smartness
Classification and discrimination; cluster analysis (statistical aspects) (62H30) Learning and adaptive systems in artificial intelligence (68T05)
Cites Work
- Fast rates for empirical vector quantization
- NP-hardness of Euclidean sum-of-squares clustering
- Strong consistency of k-means clustering
- Estimation of dependences based on empirical data. Transl. from the Russian by Samuel Kotz
- Asymptotic distributions for clustering criteria
- Robust \(k\)-means clustering for distributions with two moments
- Nonasymptotic bounds for vector quantization in Hilbert spaces
- A Vector-Contraction Inequality for Rademacher Complexities
- Adaptive Noisy Clustering
- On the Performance of Clustering in Hilbert Spaces
- Bayesian k-Means as a “Maximization-Expectation” Algorithm
- Learning Theory
- 10.1162/153244302760200678
- Reconciling modern machine-learning practice and the classical bias–variance trade-off
- Clustering Methods: A History of k-Means Algorithms
- A Sober Look at Clustering Stability