Improving the Validity of Decision Trees as Explanations
arXiv: 2306.06777 · MaRDI QID: Q6439965
Author name not available
Publication date: 11 June 2023
Abstract: In classification and forecasting with tabular data, one often utilizes tree-based models. These can be competitive with deep neural networks on tabular data [cf. Grinsztajn et al., NeurIPS 2022, arXiv:2207.08815] and, under some conditions, explainable. The explainability depends on the depth of the tree and the accuracy in each leaf of the tree. Here, we train a low-depth tree with the objective of minimising the maximum misclassification error across the leaf nodes, and then "suspend" further tree-based models (e.g., trees of unlimited depth) from each leaf of the low-depth tree. The low-depth tree is easily explainable, while the overall statistical performance of the combined low-depth and suspended tree-based models improves upon decision trees of unlimited depth trained using classical methods (e.g., CART) and is comparable to state-of-the-art methods (e.g., well-tuned XGBoost).
Has companion code repository: https://github.com/Epanemu/FCT
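The two-stage construction described in the abstract can be sketched in a few lines. The following is a minimal illustration assuming scikit-learn, not the paper's implementation (see the FCT repository above for that); in particular, the shallow tree below is fit with ordinary CART, whereas the paper trains it to minimise the maximum misclassification error over the leaves.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

# Sketch of the two-stage scheme: a shallow, explainable tree with a
# deeper tree "suspended" from each of its leaves. NOTE (assumption):
# plain CART is used for the shallow tree here; the paper instead
# optimises the worst-case misclassification error across the leaves.
X, y = load_breast_cancer(return_X_y=True)

shallow = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
leaf_ids = shallow.apply(X)  # leaf index each training sample lands in

# Fit an unlimited-depth tree on the samples routed to each shallow leaf.
suspended = {
    leaf: DecisionTreeClassifier(random_state=0).fit(
        X[leaf_ids == leaf], y[leaf_ids == leaf]
    )
    for leaf in np.unique(leaf_ids)
}

def predict(X_new):
    # Route through the shallow tree first, then the leaf's suspended tree.
    leaves = shallow.apply(X_new)
    out = np.empty(len(X_new), dtype=y.dtype)
    for leaf in np.unique(leaves):
        mask = leaves == leaf
        out[mask] = suspended[leaf].predict(X_new[mask])
    return out

print("combined training accuracy:", (predict(X) == y).mean())

The shallow tree alone provides the explanation (at depth 2 it has at most four leaves, each reached by two splits), while the suspended trees recover the accuracy that the depth cap would otherwise sacrifice.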