Learning algorithms may perform worse with increasing training set size: algorithm-data incompatibility
DOI: 10.1016/j.csda.2013.05.021
zbMath: 1506.62203
OpenAlex: W2028011900
MaRDI QID: Q1623460
Waleed A. Yousef, Subrata Kundu
Publication date: 23 November 2018
Published in: Computational Statistics and Data Analysis
Full work available at URL: https://doi.org/10.1016/j.csda.2013.05.021
Keywords: convergence; pattern recognition; machine learning; statistical learning; stable distribution; stochastic concentration
MSC classification:
- Infinitely divisible distributions; stable distributions (60E07)
- Computational methods for problems pertaining to statistics (62-08)
- Point estimation (62F10)
- Learning and adaptive systems in artificial intelligence (68T05)
- Statistical aspects of big data and data science (62R07)
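The title and keywords point at a concrete mechanism: for symmetric α-stable data with stability index α < 1, the sample mean of n observations has scale n^(1/α − 1) times that of a single observation, so it disperses rather than concentrates as n grows. The sketch below (not taken from the paper; the nearest-sample-mean rule, α = 0.7, and all numbers are illustrative assumptions) shows a plug-in classifier whose average test error worsens as the training set grows.

```python
# Minimal illustrative sketch of "more training data can hurt" under
# heavy-tailed (alpha-stable, alpha < 1) data. All settings are assumptions
# chosen for illustration, not the paper's construction.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
alpha = 0.7      # stability index < 1: the sample mean does not concentrate
delta = 1.0      # true class centers at -delta and +delta
n_test = 2000
n_trials = 200

for n_train in (10, 100, 1000):
    errs = []
    for _ in range(n_trials):
        # Training data: symmetric stable noise around each class center.
        x0 = -delta + levy_stable.rvs(alpha, 0.0, size=n_train, random_state=rng)
        x1 = +delta + levy_stable.rvs(alpha, 0.0, size=n_train, random_state=rng)
        m0, m1 = x0.mean(), x1.mean()   # plug-in (sample-mean) class centers
        # Test data from the same two classes.
        t0 = -delta + levy_stable.rvs(alpha, 0.0, size=n_test, random_state=rng)
        t1 = +delta + levy_stable.rvs(alpha, 0.0, size=n_test, random_state=rng)
        # Nearest-sample-mean rule: assign each point to the closer center.
        err0 = (np.abs(t0 - m1) < np.abs(t0 - m0)).mean()   # class 0 mislabeled
        err1 = (np.abs(t1 - m0) < np.abs(t1 - m1)).mean()   # class 1 mislabeled
        errs.append((err0 + err1) / 2.0)
    print(f"n_train={n_train:5d}  mean test error ~ {np.mean(errs):.3f}")
```

Because the sample means m0 and m1 spread out as n_train increases (scale growing like n^(1/α − 1)), the estimated decision boundary becomes less reliable with more data, and the averaged error rises toward chance, illustrating the algorithm-data incompatibility named in the title.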
Cites Work
- Sample size planning for developing classifiers using high-dimensional DNA microarray data
- Linear and nonlinear regression with stable errors
- Multivariate stable densities as functions of one dimensional projections
- Classification rules for stable distributions
- Maximum likelihood estimators in regression models with infinite variance innovations
- Conditional distributions and characterizations of multivariate stable distribution
- Estimating the center of symmetry: is it always better to use larger sample sizes?
- On Dimensionality, Sample Size, Classification Error, and Complexity of Classification Algorithm in Pattern Recognition
- Probability: A Graduate Course
- Density parameter estimation of skewed α-stable distributions
- The elements of statistical learning. Data mining, inference, and prediction