Scalable First-Order Bayesian Optimization via Structured Automatic Differentiation

Publication: 6402321

arXiv: 2206.08366
MaRDI QID: Q6402321

Author name not available

Publication date: 16 June 2022

Abstract: Bayesian Optimization (BO) has shown great promise for the global optimization of functions that are expensive to evaluate, but despite many successes, standard approaches can struggle in high dimensions. To improve the performance of BO, prior work suggested incorporating gradient information into a Gaussian process surrogate of the objective, giving rise to kernel matrices of size $nd \times nd$ for $n$ observations in $d$ dimensions. Naïvely multiplying with (resp. inverting) these matrices requires $\mathcal{O}(n^2 d^2)$ (resp. $\mathcal{O}(n^3 d^3)$) operations, which becomes infeasible for moderate dimensions and sample sizes. Here, we observe that a wide range of kernels gives rise to structured matrices, enabling an exact $\mathcal{O}(n^2 d)$ matrix-vector multiply for gradient observations and $\mathcal{O}(n^2 d^2)$ for Hessian observations. Beyond canonical kernel classes, we derive a programmatic approach to leveraging this type of structure for transformations and combinations of the discussed kernel classes, which constitutes a structure-aware automatic differentiation algorithm. Our methods apply to virtually all canonical kernels and automatically extend to complex kernels, like the neural network, radial basis function network, and spectral mixture kernels without any additional derivations, enabling flexible, problem-dependent modeling while scaling first-order BO to high $d$.
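To make the complexity claim concrete, below is a minimal illustrative sketch in Python/NumPy (not the authors' Julia implementation in the companion repository). It assumes an isotropic kernel $k(x, y) = f(r)$ with $r = \|x - y\|^2 / 2$, for which each $d \times d$ gradient-kernel block has the form $-f''(r)\,(x - y)(x - y)^\top - f'(r)\,I$, i.e. a rank-one matrix plus a scaled identity. Applying such a block to a vector costs $\mathcal{O}(d)$, so an exact matrix-vector product with the full $nd \times nd$ matrix costs $\mathcal{O}(n^2 d)$ rather than the $\mathcal{O}(n^2 d^2)$ of a dense multiply. The function name and radial-profile arguments are hypothetical, chosen for this sketch only.

```python
import numpy as np

def grad_kernel_mvm(X, v, f1, f2):
    """Multiply the nd x nd gradient-kernel matrix by v without forming it.

    X      : (n, d) array of inputs
    v      : (n*d,) vector
    f1, f2 : first and second derivatives f'(r), f''(r) of the radial
             profile f, where k(x, y) = f(r) and r = ||x - y||^2 / 2.
    """
    n, d = X.shape
    V = v.reshape(n, d)
    out = np.zeros_like(V)
    for i in range(n):
        for j in range(n):
            u = X[i] - X[j]            # difference vector, O(d)
            r = 0.5 * (u @ u)
            # Block B_ij = -f''(r) u u^T - f'(r) I, applied to V[j] in O(d):
            # one dot product for the rank-one part, one scaled add for I.
            out[i] += -f2(r) * (u @ V[j]) * u - f1(r) * V[j]
    return out.ravel()

# Example: squared-exponential kernel k(x, y) = exp(-||x - y||^2 / 2),
# so f(r) = exp(-r), f'(r) = -exp(-r), f''(r) = exp(-r).
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))      # n = 50 observations in d = 10 dims
v = rng.standard_normal(50 * 10)
w = grad_kernel_mvm(X, v, lambda r: -np.exp(-r), lambda r: np.exp(-r))
```

The double loop is written for clarity; the point is that the per-pair work is $\mathcal{O}(d)$, which yields the $\mathcal{O}(n^2 d)$ total. Combined with an iterative solver such as conjugate gradients, a fast multiply of this kind can stand in for the $\mathcal{O}(n^3 d^3)$ dense factorization the abstract mentions.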




Has companion code repository: https://github.com/sebastianament/covariancefunctions.jl
