Performance of Johnson--Lindenstrauss Transform for $k$-Means and $k$-Medians Clustering
Publication: 5067443
DOI: 10.1137/20M1330701
zbMath: 1483.68311
OpenAlex: W2899746470
MaRDI QID: Q5067443
Ilya Razenshteyn, Yury Makarychev, Konstantin Makarychev
Publication date: 1 April 2022
Published in: SIAM Journal on Computing
Full work available at URL: https://doi.org/10.1137/20m1330701
Classification:
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Learning and adaptive systems in artificial intelligence (68T05)
- Metric embeddings as related to computational problems and algorithms (68R12)
Cites Work
- Probability in Banach spaces. Isoperimetry and processes
- On general minimax theorems
- Fast dimension reduction using Rademacher series on dual BCH codes
- Database-friendly random projections: Johnson-Lindenstrauss with binary coins
- Problems and results in extremal combinatorics. I
- Adaptive estimation of a quadratic functional by model selection
- Empirical processes and random projections
- A sparse Johnson-Lindenstrauss transform
- Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform
- An Almost Optimal Unrestricted Fast Johnson-Lindenstrauss Transform
- Dimensionality Reduction for k-Means Clustering and Low Rank Approximation
- Randomized Dimensionality Reduction for $k$-Means Clustering
- New and Improved Johnson–Lindenstrauss Embeddings via the Restricted Isometry Property
- Sparser Johnson-Lindenstrauss Transforms
- Metric dimension reduction: a snapshot of the Ribe program
- Extensions of Lipschitz mappings into a Hilbert space
- Optimal Approximate Matrix Product in Terms of Stable Rank
- High-Dimensional Probability
- Least squares quantization in PCM
- An elementary proof of a theorem of Johnson and Lindenstrauss
- Oblivious dimension reduction for k-means: beyond subspaces and the Johnson-Lindenstrauss lemma
- Deterministic Feature Selection for K-Means Clustering
- New constructions of RIP matrices with fast multiplication and fewer rows
- Turning Big data into tiny data: Constant-size coresets for k-means, PCA and projective clustering