Sparse Perturbations for Improved Convergence in Stochastic Zeroth-Order Optimization

Publication: 6341949

arXiv: 2006.01759
MaRDI QID: Q6341949

Author name not available

Publication date: 2 June 2020

Abstract: Interest in stochastic zeroth-order (SZO) methods has recently been revived in black-box optimization scenarios such as adversarial black-box attacks on deep neural networks. SZO methods require only the ability to evaluate the objective function at randomly perturbed input points; their weakness, however, is that their convergence speed depends on the dimensionality of the function being optimized. We present a sparse SZO optimization method that reduces this dependence to the expected dimensionality of the random perturbation during learning. We give a proof that justifies this reduction for sparse SZO optimization of non-convex functions, without making any assumptions about the sparsity of the objective function or its gradient. Furthermore, we present experimental results for neural networks on MNIST and CIFAR showing faster convergence in training loss and test accuracy, and a smaller distance of the gradient approximation to the true gradient, for sparse SZO compared to dense SZO.
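
The abstract describes estimating gradients from function evaluations at sparsely perturbed points. The following is a minimal NumPy sketch of that general idea, assuming a two-point finite-difference estimator whose perturbation is restricted to k randomly chosen coordinates; the function names and parameters (sparse_szo_gradient, k, mu, lr) are illustrative assumptions and not the paper's exact scheme, which is available in the companion repository linked below.

```python
import numpy as np

def sparse_szo_gradient(f, x, k, mu=1e-3, rng=None):
    """Two-point zeroth-order gradient estimate from a sparse random
    perturbation supported on k randomly chosen coordinates.

    Illustrative sketch: f is the black-box objective, x the current point,
    k the number of perturbed coordinates, mu the smoothing radius.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    # Sample a Gaussian direction that is nonzero on only k coordinates.
    idx = rng.choice(d, size=k, replace=False)
    u = np.zeros(d)
    u[idx] = rng.standard_normal(k)
    # Finite-difference estimate of the directional derivative along u.
    g = (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu)
    # The resulting gradient estimate is supported on the perturbed coordinates.
    return g * u

def sparse_szo_step(f, x, k, lr=0.1, mu=1e-3, rng=None):
    """One SGD-style update using the sparse zeroth-order gradient estimate."""
    return x - lr * sparse_szo_gradient(f, x, k, mu=mu, rng=rng)

if __name__ == "__main__":
    # Toy usage: minimize a quadratic using only function evaluations.
    f = lambda x: np.sum(x ** 2)
    x = np.ones(100)
    for _ in range(500):
        x = sparse_szo_step(f, x, k=10, lr=0.05)
    print(f(x))  # objective value should have decreased toward 0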




Companion code repository: https://github.com/StatNLP/sparse_szo








