A new nearest neighbor classification algorithm based on local probability centers (Q1718136)
scientific article; zbMATH DE number 7016175
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | A new nearest neighbor classification algorithm based on local probability centers | scientific article; zbMATH DE number 7016175 | |
Statements
A new nearest neighbor classification algorithm based on local probability centers (English)
8 February 2019
Summary: The nearest neighbor classifier is one of the most popular classifiers and has been used successfully in pattern recognition and machine learning. One drawback of \(k\)NN is that it performs poorly when class distributions overlap. The local probability center (LPC) algorithm was recently proposed to address this problem; its main idea is to weight samples according to their posterior probabilities. However, LPC performs poorly when the value of \(k\) is very small and on higher-dimensional datasets. To deal with this problem, this paper shows that the gradient of the posterior probability function can be estimated under sufficient assumptions. This theoretical property makes it possible to compute the inner product of two vectors faithfully. To improve performance on high-dimensional datasets, the multidimensional Parzen window and the Euler-Richardson method are utilized, and a new classifier based on local probability centers is developed. Experimental results show that the proposed method yields stable performance over a wide range of \(k\), robustness to class overlap, and good performance on high-dimensional data. The proposed theorem can also be applied to other mathematical problems and applications. Furthermore, the proposed method is an attractive classifier because of its simplicity.
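The summary names the main ingredients (posterior-probability weighting, a multidimensional Parzen window, the Euler-Richardson method) without giving formulas. As a rough illustration of the weighting idea only, here is a minimal Python sketch: it estimates class posteriors with a Gaussian Parzen window and lets each of the \(k\) nearest neighbors vote with the estimated posterior probability of its own label. The kernel, the bandwidth `h`, and the voting rule are illustrative assumptions, not the authors' exact LPC algorithm.

```python
import numpy as np

def parzen_posteriors(X, y, query, h=1.0):
    # Gaussian Parzen-window estimate of P(class | query).
    # The bandwidth h and the Gaussian kernel are illustrative choices;
    # the paper's exact kernel and bandwidth selection may differ.
    w = np.exp(-np.sum((X - query) ** 2, axis=1) / (2.0 * h ** 2))
    classes = np.unique(y)
    scores = np.array([w[y == c].sum() for c in classes])
    s = scores.sum()
    if s == 0.0:  # all kernel weights underflowed: fall back to uniform
        return classes, np.full(len(classes), 1.0 / len(classes))
    return classes, scores / s

def weighted_knn_predict(X, y, query, k=5, h=1.0):
    # kNN vote in which each neighbor counts with the estimated posterior
    # probability of its own label, so samples lying in overlapping regions
    # (low posterior) contribute less -- the weighting idea behind LPC.
    idx = np.argsort(np.sum((X - query) ** 2, axis=1))[:k]
    classes = np.unique(y)
    votes = np.zeros(len(classes))
    for i in idx:
        cls, post = parzen_posteriors(X, y, X[i], h=h)
        votes[np.searchsorted(classes, y[i])] += post[np.searchsorted(cls, y[i])]
    return classes[np.argmax(votes)]

# Toy usage: two overlapping Gaussian classes in 2D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(1.5, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(weighted_knn_predict(X, y, np.array([0.7, 0.7]), k=7))
```

The summary also cites the Euler-Richardson method but does not detail how it is coupled to the classifier; for reference, a generic Euler-Richardson (midpoint) integration step looks like this:

```python
def euler_richardson_step(f, t, y, dt):
    # One Euler-Richardson (midpoint) step for y' = f(t, y):
    # a half Euler step supplies the slope used for the full step.
    y_mid = y + 0.5 * dt * f(t, y)
    return y + dt * f(t + 0.5 * dt, y_mid)
```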