Is combining classifiers with stacking better than selecting the best one?
DOI: 10.1023/B:MACH.0000015881.36452.6e · zbMath: 1101.68077 · MaRDI QID: Q703052
Publication date: 19 January 2005
Published in: Machine Learning
Mathematics Subject Classification: Learning and adaptive systems in artificial intelligence (68T05) · Pattern recognition, speech recognition (68T10)
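The paper asks whether stacked generalization, in which a meta-level learner is trained on the cross-validated predictions of several base classifiers, outperforms simply selecting the single best base classifier. The sketch below illustrates the two strategies being compared; it is a generic, hypothetical example rather than the paper's experimental setup (the dataset, the three base learners, and the logistic-regression meta-learner are illustrative assumptions; the paper's own experiments use other meta-level learners, such as multi-response model trees over class-probability features).

```python
# Minimal sketch (not the paper's setup): stacking vs. selecting the
# single best base classifier by cross-validated accuracy.
# Dataset, base learners, and meta-learner are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

base_learners = [
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("nb", GaussianNB()),
    ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
]

# Strategy 1: select the best single classifier by cross-validated accuracy.
scores = {
    name: cross_val_score(model, X, y, cv=5).mean()
    for name, model in base_learners
}
best_name, best_score = max(scores.items(), key=lambda kv: kv[1])

# Strategy 2: stack all base classifiers; the meta-learner is fit on their
# out-of-fold predictions (the cv=5 inside the stacker produces them).
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=make_pipeline(
        StandardScaler(), LogisticRegression(max_iter=1000)
    ),
    cv=5,
)
stack_score = cross_val_score(stack, X, y, cv=5).mean()

print(f"best single classifier ({best_name}): {best_score:.3f}")
print(f"stacked ensemble:                     {stack_score:.3f}")
```

Running the comparison under a proper outer cross-validation loop, as done here with the outer cv=5, matters: evaluating the "best" classifier on the same folds used to select it would bias the comparison in its favor.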
Related Items (15)
Extremizing and Antiextremizing in Bayesian Ensembles of Binary-Event Forecasts
A cooperative constructive method for neural networks for pattern recognition
The combination of multiple classifiers using an evidential reasoning approach
Weighted classifier ensemble based on quadratic form
Cascade interpolation learning with double subspaces and confidence disturbance for imbalanced problems
Boosting random subspace method
On hybrid classification using model assisted posterior estimates
A new hybrid ensemble machine-learning model for severity risk assessment and post-COVID prediction system
An empirical bias–variance analysis of DECORATE ensemble method at different training sample sizes
A probabilistic classifier ensemble weighting scheme based on cross-validated accuracy estimates
Improving malware detection by applying multi-inducer ensemble
Taxonomy for characterizing ensemble methods in classification tasks: a review and annotated bibliography
Supervised projection approach for boosting classifiers
An analytical toast to wine: Using stacked generalization to predict wine preference
Dynamic Latent Class Model Averaging for Online Prediction