One-Nearest-Neighbor Search is All You Need for Minimax Optimal Regression and Classification
MaRDI QID: Q6390199
arXiv: 2202.02464
Author name not available
Publication date: 4 February 2022
Abstract: Recently, Qiao, Duan, and Cheng (2019) proposed a distributed nearest-neighbor classification method, in which a massive dataset is split into smaller groups, each processed with a k-nearest-neighbor classifier, and the final class label is predicted by a majority vote among these groupwise class labels. This paper shows that the distributed algorithm with k = 1 over a sufficiently large number of groups attains a minimax optimal error rate up to a multiplicative logarithmic factor under some regularity conditions, for both regression and classification problems. Roughly speaking, distributed 1-nearest-neighbor rules with M groups have performance comparable to standard Θ(M)-nearest-neighbor rules. In the analysis, alternative rules with a refined aggregation method are proposed and shown to attain exact minimax optimal rates.
Has companion code repository: https://github.com/jongharyu/split-knn-rules
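The split-and-aggregate scheme described in the abstract is simple enough to sketch directly. Below is a minimal illustrative Python sketch, not the authors' implementation (see the companion repository above for that); it assumes numpy arrays and, for classification, integer labels 0, 1, .... The training set is randomly split into M groups, each group answers with its own 1-nearest-neighbor estimate, and the groupwise estimates are combined by a majority vote (classification) or an average (regression).

import numpy as np
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor

def split_1nn_classify(X_train, y_train, X_test, num_groups, seed=0):
    """Majority vote over per-group 1-NN classifiers (labels must be 0, 1, ...)."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(X_train))  # random split into num_groups groups
    votes = []
    for group in np.array_split(perm, num_groups):
        clf = KNeighborsClassifier(n_neighbors=1).fit(X_train[group], y_train[group])
        votes.append(clf.predict(X_test))  # one 1-NN label per group
    votes = np.stack(votes)  # shape: (num_groups, n_test)
    # Majority vote across groups for each test point.
    return np.array([np.bincount(col).argmax() for col in votes.T])

def split_1nn_regress(X_train, y_train, X_test, num_groups, seed=0):
    """Average of per-group 1-NN regression estimates."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(X_train))
    preds = []
    for group in np.array_split(perm, num_groups):
        reg = KNeighborsRegressor(n_neighbors=1).fit(X_train[group], y_train[group])
        preds.append(reg.predict(X_test))  # one 1-NN estimate per group
    return np.mean(preds, axis=0)  # average the groupwise estimates

# Toy usage: binary labels from a linear boundary.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    X_test = rng.normal(size=(5, 2))
    print(split_1nn_classify(X, y, X_test, num_groups=50))

Note that this sketch mirrors only the basic split-and-vote scheme of Qiao, Duan, and Cheng (2019); the refined aggregation rules analyzed in the paper, which attain the exact minimax rates, replace this plain majority vote or average.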