Preference Exploration for Efficient Bayesian Optimization with Multiple Outcomes

From MaRDI portal
Publication: 6394315

arXiv: 2203.11382 · MaRDI QID: Q6394315

Author name not available

Publication date: 21 March 2022

Abstract: We consider Bayesian optimization of expensive-to-evaluate experiments that generate vector-valued outcomes over which a decision-maker (DM) has preferences. These preferences are encoded by a utility function that is not known in closed form but can be estimated by asking the DM to express preferences over pairs of outcome vectors. To address this problem, we develop Bayesian optimization with preference exploration, a novel framework that alternates between interactive real-time preference learning with the DM via pairwise comparisons between outcomes, and Bayesian optimization with a learned compositional model of DM utility and outcomes. Within this framework, we propose preference exploration strategies specifically designed for this task, and demonstrate their performance via extensive simulation studies.
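The alternation the abstract describes — a preference exploration stage that learns the DM's utility from pairwise comparisons, followed by an experimentation stage that optimizes the learned utility — can be illustrated with a minimal, self-contained sketch. Everything here is an illustrative assumption rather than the paper's method: the linear utility `w_true`, the toy outcome function `f`, the Bradley-Terry-style logistic fit, and the grid search all stand in for the Gaussian process outcome and utility models and the acquisition functions the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup (not from the paper): a two-outcome experiment
# f(x) and a hidden linear DM utility u(y) = w_true @ y that must be
# learned from pairwise comparisons.
w_true = np.array([2.0, 1.0])

def f(x):
    # Cheap stand-in for the expensive experiment; the actual framework
    # models outcomes with a multi-output surrogate instead.
    return np.array([np.sin(3.0 * x) + x, np.cos(2.0 * x)])

def dm_prefers(y1, y2):
    # Simulated DM answering a pairwise comparison via the hidden utility.
    return float(w_true @ y1 > w_true @ y2)

def fit_utility(pairs, prefs, n_steps=2000, lr=0.2):
    # Bradley-Terry-style logistic fit of a linear utility estimate
    # (the paper instead places a GP prior over the utility).
    w = np.zeros(2)
    for _ in range(n_steps):
        grad = np.zeros(2)
        for (y1, y2), pref in zip(pairs, prefs):
            d = y1 - y2
            p = 1.0 / (1.0 + np.exp(-(w @ d)))  # P(y1 preferred)
            grad += (p - pref) * d
        w -= lr * grad / len(pairs)
    return w

# Preference exploration stage: query the simulated DM on outcome pairs.
pairs = [(rng.normal(size=2), rng.normal(size=2)) for _ in range(30)]
prefs = [dm_prefers(y1, y2) for y1, y2 in pairs]
w_hat = fit_utility(pairs, prefs)

# Experimentation stage: pick the design with highest estimated utility
# (grid search here; the paper uses Bayesian optimization acquisitions).
xs = np.linspace(0.0, 1.0, 101)
best_x = max(xs, key=lambda x: w_hat @ f(x))
```

In the real framework both stages are interleaved and both models are updated as new comparisons and experiment results arrive; this sketch runs each stage once only to make the two roles concrete.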

Has companion code repository: https://github.com/facebookresearch/preference-exploration
