Decision Trees for Decision-Making under the Predict-then-Optimize Framework

arXiv: 2003.00360
MaRDI QID: Q6335892

Author name not available.

Publication date: 29 February 2020

Abstract: We consider the use of decision trees for decision-making problems under the predict-then-optimize framework. That is, we would like to first use a decision tree to predict unknown input parameters of an optimization problem, and then make decisions by solving the optimization problem using the predicted parameters. A natural loss function in this framework is to measure the suboptimality of the decisions induced by the predicted input parameters, as opposed to measuring loss using input parameter prediction error. This natural loss function is known in the literature as the Smart Predict-then-Optimize (SPO) loss, and we propose a tractable methodology called SPO Trees (SPOTs) for training decision trees under this loss. SPOTs benefit from the interpretability of decision trees, providing an interpretable segmentation of contextual features into groups with distinct optimal solutions to the optimization problem of interest. We conduct several numerical experiments on synthetic and real data, including the prediction of travel times for shortest-path problems and of click probabilities for news article recommendation. We demonstrate on these datasets that SPOTs simultaneously provide higher-quality decisions and significantly lower model complexity than other machine learning approaches (e.g., CART) trained to minimize prediction error.
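For reference, a brief sketch of the SPO loss mentioned in the abstract, written for a linear objective over a feasible region S as in the predict-then-optimize literature (the notation below is ours, not quoted from the paper): given a true cost vector c and a predicted cost vector \hat{c},

\[
\ell_{\mathrm{SPO}}(\hat{c}, c) \;=\; c^{\top} w^{*}(\hat{c}) \;-\; \min_{w \in S} c^{\top} w,
\qquad
w^{*}(\hat{c}) \in \arg\min_{w \in S} \hat{c}^{\top} w,
\]

i.e., the excess true cost of the decision induced by the prediction over the true optimum; when the argmin is not unique, the SPO literature typically breaks ties adversarially (worst case).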




Has companion code repository: https://github.com/rtm2130/SPOTree
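To illustrate how the SPO loss above is evaluated, here is a minimal, self-contained Python sketch for a toy shortest-path instance. The graph, edge costs, and helper functions (path_cost, best_path, spo_loss) are illustrative assumptions for this page only; they do not use the API of the SPOTree repository linked above.

```python
# A minimal sketch of the SPO loss on a toy shortest-path instance.
# The graph, costs, and brute-force search are illustrative assumptions,
# not taken from the paper or the SPOTree code.
from itertools import permutations

# Directed graph on nodes 0..3; costs are given as dicts keyed by edge.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]

def path_cost(path, costs):
    """Total cost of a node path under a dict of edge costs."""
    return sum(costs[(u, v)] for u, v in zip(path, path[1:]))

def best_path(costs, source=0, target=3):
    """Brute-force shortest path (fine for this tiny example)."""
    nodes = {u for e in edges for u in e}
    candidates = []
    for r in range(len(nodes)):
        for middle in permutations(nodes - {source, target}, r):
            path = (source, *middle, target)
            if all((u, v) in costs for u, v in zip(path, path[1:])):
                candidates.append(path)
    return min(candidates, key=lambda p: path_cost(p, costs))

def spo_loss(predicted, true):
    """Suboptimality of the decision induced by the predicted costs."""
    decision = best_path(predicted)  # optimize using the predictions
    return path_cost(decision, true) - path_cost(best_path(true), true)

true_costs = {(0, 1): 1.0, (0, 2): 4.0, (1, 2): 1.0, (1, 3): 5.0, (2, 3): 1.0}
pred_costs = {(0, 1): 1.0, (0, 2): 1.5, (1, 2): 3.0, (1, 3): 5.0, (2, 3): 1.0}

print(spo_loss(pred_costs, true_costs))  # > 0 when predictions mislead the decision
```

With these numbers the script prints 2.0: the predicted costs make the path 0 -> 2 -> 3 look cheapest, while the true optimum is 0 -> 1 -> 2 -> 3, so the induced decision incurs an extra true cost of 2.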







