Bayesian Optimization with Gradients

Publication: 6284266

arXiv: 1703.04389
MaRDI QID: Q6284266

Author name not available

Publication date: 13 March 2017

Abstract: Bayesian optimization has been successful at global optimization of expensive-to-evaluate multimodal objective functions. However, unlike most optimization methods, Bayesian optimization typically does not use derivative information. In this paper we show how Bayesian optimization can exploit derivative information to decrease the number of objective function evaluations required for good performance. In particular, we develop a novel Bayesian optimization algorithm, the derivative-enabled knowledge-gradient (d-KG), for which we show one-step Bayes-optimality, asymptotic consistency, and greater one-step value of information than is possible in the derivative-free setting. Our procedure accommodates noisy and incomplete derivative information, comes in both sequential and batch forms, and can optionally reduce the computational cost of inference through automatically selected retention of a single directional derivative. We also compute the d-KG acquisition function and its gradient using a novel fast discretization-free technique. We show that d-KG provides state-of-the-art performance compared to a wide range of optimization procedures with and without gradients, on benchmarks including logistic regression, deep learning, kernel learning, and k-nearest neighbors.
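
The entry above gives only the abstract. As a rough, self-contained illustration of the modelling ingredient that d-KG builds on (a Gaussian process surrogate conditioned jointly on noisy function values and noisy derivatives), here is a minimal one-dimensional NumPy sketch. The RBF kernel, hyperparameter values, toy objective, and all function names below are illustrative assumptions and are not taken from the paper or from the Cornell-MOE repository.

import numpy as np

# Minimal 1-D sketch: a GP surrogate conditioned on both noisy function
# values and noisy derivatives, the kind of model d-KG builds on.
# The RBF kernel, hyperparameters, and toy objective are illustrative
# assumptions, not values from the paper.

ell, sig2, noise = 0.4, 1.0, 1e-4   # length scale, signal variance, obs noise

def k(a, b):
    """RBF covariance Cov(f(a), f(b)) for 1-D input arrays a, b."""
    r = a[:, None] - b[None, :]
    return sig2 * np.exp(-0.5 * (r / ell) ** 2)

def k_fd(a, b):
    """Cross-covariance Cov(f(a), f'(b)) = dk/db for the RBF kernel."""
    r = a[:, None] - b[None, :]
    return k(a, b) * r / ell**2

def k_dd(a, b):
    """Covariance of derivatives Cov(f'(a), f'(b)) = d^2 k / (da db)."""
    r = a[:, None] - b[None, :]
    return k(a, b) * (1.0 / ell**2 - r**2 / ell**4)

def posterior_mean(X, y, Xs, use_grad, dy=None):
    """GP posterior mean at Xs, optionally conditioning on derivative data dy."""
    if use_grad:
        K = np.block([[k(X, X) + noise * np.eye(len(X)), k_fd(X, X)],
                      [k_fd(X, X).T, k_dd(X, X) + noise * np.eye(len(X))]])
        ks = np.hstack([k(Xs, X), k_fd(Xs, X)])
        obs = np.concatenate([y, dy])
    else:
        K = k(X, X) + noise * np.eye(len(X))
        ks = k(Xs, X)
        obs = y
    return ks @ np.linalg.solve(K, obs)

# Toy objective and its derivative (stand-ins for an expensive black box).
f = lambda x: np.sin(3 * x) + 0.5 * x
df = lambda x: 3 * np.cos(3 * x) + 0.5

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=5)
y = f(X) + 0.01 * rng.standard_normal(5)
dy = df(X) + 0.01 * rng.standard_normal(5)
Xs = np.linspace(-2, 2, 200)

for use_grad in (False, True):
    mu = posterior_mean(X, y, Xs, use_grad, dy)
    rmse = np.sqrt(np.mean((mu - f(Xs)) ** 2))
    print(f"gradients={use_grad}:  surrogate RMSE = {rmse:.3f}")

Running the sketch prints the surrogate's error against the toy objective with and without derivative observations; on toy problems like this, the derivative-conditioned surrogate typically tracks the objective more closely from the same five evaluations, which is the effect the paper exploits to reduce the number of expensive objective evaluations.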

Has companion code repository: https://github.com/wujian16/Cornell-MOE

This page was built for publication: Bayesian Optimization with Gradients
