The reparameterization trick for acquisition functions

Publication: 6294699

arXiv: 1712.00424
MaRDI QID: Q6294699

Author name not available

Publication date: 1 December 2017

Abstract: Bayesian optimization is a sample-efficient approach to solving global optimization problems. Along with a surrogate model, this approach relies on theoretically motivated value heuristics (acquisition functions) to guide the search process. Maximizing acquisition functions yields the best performance; unfortunately, this ideal is difficult to achieve since optimizing acquisition functions per se is frequently non-trivial. This statement is especially true in the parallel setting, where acquisition functions are routinely non-convex, high-dimensional, and intractable. Here, we demonstrate how many popular acquisition functions can be formulated as Gaussian integrals amenable to the reparameterization trick and, ensuingly, gradient-based optimization. Further, we use this reparameterized representation to derive an efficient Monte Carlo estimator for the upper confidence bound acquisition function in the context of parallel selection.
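
As an illustration of the approach described in the abstract, the sketch below applies the reparameterization trick to a parallel upper-confidence-bound-style acquisition function: posterior samples over a candidate batch are written as y = mu + L z with z ~ N(0, I) and L the Cholesky factor of the posterior covariance, so a Monte Carlo estimate of the acquisition value stays differentiable with respect to the candidates and can be maximized by gradient ascent. This is a minimal sketch under assumptions, not the authors' implementation: the toy posterior toy_posterior stands in for a fitted Gaussian process surrogate, and beta, n_samples, and the batch shape are illustrative choices.

import torch

torch.manual_seed(0)

def toy_posterior(X):
    # Hypothetical stand-in for a GP posterior over a candidate batch X of
    # shape (q, d); a real application would query a fitted surrogate instead.
    mean = torch.sin(X).sum(dim=-1)                    # posterior mean, (q,)
    diff = X.unsqueeze(0) - X.unsqueeze(1)             # pairwise differences, (q, q, d)
    cov = torch.exp(-0.5 * (diff ** 2).sum(dim=-1))    # RBF-style covariance, (q, q)
    return mean, cov + 1e-4 * torch.eye(X.shape[0])    # jitter for numerical stability

def qucb(X, beta=2.0, n_samples=256):
    # Reparameterized Monte Carlo estimate of a parallel UCB-style acquisition:
    # E[ max_i ( mu_i + sqrt(beta * pi / 2) * |(L z)_i| ) ],  z ~ N(0, I).
    mean, cov = toy_posterior(X)
    L = torch.linalg.cholesky(cov)                     # (q, q)
    z = torch.randn(n_samples, X.shape[0])             # base samples, (n_samples, q)
    gamma = z @ L.T                                    # correlated Gaussian draws
    vals = mean + (beta * torch.pi / 2) ** 0.5 * gamma.abs()
    return vals.max(dim=-1).values.mean()              # best point per sample, averaged

# Gradient-based optimization of a batch of q = 3 candidates in d = 2 dimensions.
X = torch.randn(3, 2, requires_grad=True)
opt = torch.optim.Adam([X], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = -qucb(X)                                    # ascend the acquisition value
    loss.backward()
    opt.step()
print("optimized candidate batch:\n", X.detach())

Because all randomness is confined to the base samples z, gradients flow through the Cholesky factor and the posterior statistics back to the candidate points, which is what makes joint optimization of a non-convex, high-dimensional parallel acquisition function tractable with standard gradient methods.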

Has companion code repository: https://github.com/svedel/greattunes
