Learning Optimal Fair Classification Trees: Trade-offs Between Interpretability, Fairness, and Accuracy

arXiv: 2201.09932 · MaRDI QID: Q6389047

Author name not available

Publication date: 24 January 2022

Abstract: The increasing use of machine learning in high-stakes domains -- where people's livelihoods are impacted -- creates an urgent need for interpretable, fair, and highly accurate algorithms. With these needs in mind, we propose a mixed integer optimization (MIO) framework for learning optimal classification trees -- one of the most interpretable models -- that can be augmented with arbitrary fairness constraints. In order to better quantify the "price of interpretability", we also propose a new measure of model interpretability called decision complexity that allows for comparisons across different classes of machine learning models. We benchmark our method against state-of-the-art approaches for fair classification on popular datasets; in doing so, we conduct one of the first comprehensive analyses of the trade-offs between interpretability, fairness, and predictive accuracy. Given a fixed disparity threshold, our method has a price of interpretability of about 4.2 percentage points in terms of out-of-sample accuracy compared to the best performing, complex models. However, our method consistently finds decisions with almost full parity, while other methods rarely do.
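The two quantities the abstract trades off can be made concrete with a short sketch. The following is a minimal illustration, not the paper's method: it fits an ordinary scikit-learn CART tree (a heuristic, rather than the MIO-based optimal trees the paper proposes) on synthetic data, then reports the statistical-parity disparity of its predictions and a leaf-count proxy for decision complexity. The dataset, the protected attribute, and the leaf-count proxy are all illustrative assumptions.

    # Minimal sketch (not the authors' implementation): measure the
    # statistical-parity disparity and a leaf-count complexity proxy
    # for a heuristic decision tree trained on synthetic data.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    n = 1000
    X = rng.normal(size=(n, 5))
    group = rng.integers(0, 2, size=n)  # hypothetical protected attribute
    y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

    clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    pred = clf.predict(X)

    # Statistical-parity disparity: the gap in positive-prediction
    # rates between the two groups.
    rates = [pred[group == g].mean() for g in (0, 1)]
    disparity = abs(rates[0] - rates[1])

    # Decision complexity, proxied here by the number of leaves, i.e.
    # the number of distinct decision paths the tree can assign.
    complexity = clf.get_n_leaves()

    print(f"statistical-parity disparity: {disparity:.3f}")
    print(f"decision complexity (leaf count): {complexity}")

In the paper, the tree itself is obtained by solving a mixed integer optimization problem whose constraints bound such a disparity directly; the sketch above only measures both quantities after the fact.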




Has companion code repository: https://github.com/d3m-research-group/odtlearn