Exploiting label dependencies for improved sample complexity
DOI: 10.1007/s10994-012-5312-9 · zbMath: 1273.68294 · OpenAlex: W2034845197 · MaRDI QID: Q374170
Lior Rokach, Dan Gutfreund, Lena Chekina, Bracha Shapira, Leonid (Aryeh) Kontorovich
Publication date: 22 October 2013
Published in: Machine Learning
Full work available at URL: https://doi.org/10.1007/s10994-012-5312-9
Keywords: multi-label classification; artificial datasets; conditional and unconditional label dependence; empirical experiment; ensemble learning algorithms; ensemble models diversity; generalization bounds; multi-label evaluation measures
MSC classes: Classification and discrimination; cluster analysis (statistical aspects) (62H30) · Learning and adaptive systems in artificial intelligence (68T05)
Cites Work
- On label dependence and loss minimization in multi-label classification
- Feature selection for multi-label naive Bayes classification
- The VC dimension of \(k\)-fold union
- \(k\)-Fold unions of low-dimensional concept classes
- Wrappers for feature subset selection
- Toward efficient agnostic learning
- BoosTexter: A boosting-based system for text categorization
- Genetic algorithm-based feature set partitioning for classification problems
- Learnability and the Vapnik-Chervonenkis dimension
- On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities
- Convergence of stochastic processes