On the Convergence of Prior-Guided Zeroth-Order Optimization Algorithms
From MaRDI portal
arXiv: 2107.10110
MaRDI QID: Q6373411
Author name not available
Publication date: 21 July 2021
Abstract: Zeroth-order (ZO) optimization is widely used to handle challenging tasks, such as query-based black-box adversarial attacks and reinforcement learning. Various attempts have been made to integrate prior information into the finite-difference gradient estimation procedure, with promising empirical results. However, the convergence properties of these methods are not well understood. This paper attempts to fill this gap by analyzing the convergence of prior-guided ZO algorithms under a greedy descent framework with various gradient estimators. We provide a convergence guarantee for the prior-guided random gradient-free (PRGF) algorithms. Moreover, to accelerate beyond greedy descent methods, we present a new accelerated random search (ARS) algorithm that incorporates prior information, together with a convergence analysis. Finally, our theoretical results are confirmed by experiments on several numerical benchmarks as well as adversarial attacks.
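To illustrate the kind of estimator the abstract refers to, the sketch below implements a basic random gradient-free (RGF) estimate that averages forward finite differences along random unit directions and, when available, mixes in a supplied prior direction as one of the probes. This is a simplified stand-in under stated assumptions; the paper's actual PRGF construction and its weighting of the prior differ, and the function and parameter names here are illustrative, not from the paper or its repository.

```python
import math
import random

def rgf_estimate(f, x, sigma=1e-4, n_samples=10, prior=None, seed=0):
    """Random gradient-free (RGF) gradient estimate of f at x.

    Averages forward-difference probes (f(x + sigma*u) - f(x)) / sigma * u
    over random unit directions u. If a prior direction is given, it is
    normalized and used as one of the probe directions -- a crude stand-in
    for the prior-guided estimators analyzed in the paper.
    """
    rng = random.Random(seed)
    d = len(x)
    fx = f(x)

    # Assemble probe directions: the (normalized) prior first, then random ones.
    dirs = []
    if prior is not None:
        norm = math.sqrt(sum(p * p for p in prior)) or 1.0
        dirs.append([p / norm for p in prior])
    while len(dirs) < n_samples:
        u = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(ui * ui for ui in u))
        dirs.append([ui / norm for ui in u])

    # Average the directional finite-difference estimates.
    grad = [0.0] * d
    for u in dirs:
        coeff = (f([xi + sigma * ui for xi, ui in zip(x, u)]) - fx) / sigma
        grad = [gi + coeff * ui / n_samples for gi, ui in zip(grad, u)]
    return grad
```

On a quadratic f(x) = ||x||^2, supplying the true gradient direction as the prior makes the probe along it dominate, so the estimate points in a descent-useful direction even with few queries, which is the intuition behind prior-guided acceleration.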
Has companion code repository: https://github.com/csy530216/pg-zoo