Learning with mitigating random consistency from the accuracy measure
From MaRDI portal
Publication: 2217414
DOI: 10.1007/s10994-020-05914-3
OpenAlex: W3095581905
MaRDI QID: Q2217414
Yuhua Qian, Feijiang Li, Jieting Wang
Publication date: 29 December 2020
Published in: Machine Learning
Full work available at URL: https://doi.org/10.1007/s10994-020-05914-3
Cites Work
- On similarity indices and correction for chance agreement
- One-pass AUC optimization
- Optimal correction for guessing in multiple-choice tests
- Ensembling neural networks: Many could be better than all
- Optimizing non-decomposable measures with deep networks
- Correcting Jaccard and other similarity indices for chance agreement in cluster analysis
- Statistical behavior and consistency of classification methods based on convex risk minimization
- Multigranulation information fusion: a Dempster-Shafer evidence theory-based clustering ensemble method
- Clustering ensemble based on sample's stability
- Surrogate regret bounds for generalized classification performance metrics
- Consistency of Support Vector Machines and Other Regularized Kernel Classifiers
- A theory of the learnable
- DOI: 10.1162/153244303321897690
- On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities
- Convexity, Classification, and Risk Bounds
- Measures of Association for Cross Classifications