Consensus-Based Optimization Methods Converge Globally
arXiv: 2103.15130 · MaRDI QID: Q6505079
Massimo Fornasier, Timo Klock, Konstantin Riedl
Abstract: In this paper we study consensus-based optimization (CBO), which is a multi-agent metaheuristic derivative-free optimization method that can globally minimize nonconvex nonsmooth functions and is amenable to theoretical analysis. Based on an experimentally supported intuition that, on average, CBO performs a gradient descent of the squared Euclidean distance to the global minimizer, we devise a novel technique for proving the convergence to the global minimizer in mean-field law for a rich class of objective functions. The result unveils internal mechanisms of CBO that are responsible for the success of the method. In particular, we prove that CBO performs a convexification of a very large class of optimization problems as the number of optimizing agents goes to infinity. Furthermore, we improve prior analyses by requiring minimal assumptions about the initialization of the method and by covering objectives that are merely locally Lipschitz continuous. As a core component of this analysis, we establish a quantitative nonasymptotic Laplace principle, which may be of independent interest. From the result of CBO convergence in mean-field law, it becomes apparent that the hardness of any global optimization problem is necessarily encoded in the rate of the mean-field approximation, for which we provide a novel probabilistic quantitative estimate. The combination of these results allows us to obtain global convergence guarantees for the numerical CBO method with provable polynomial complexity.
Has companion code repository: https://github.com/KonstantinRiedl/CBOGlobalConvergenceAnalysis
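To make the abstract's description concrete, the following is a minimal sketch of the standard CBO iteration (an Euler–Maruyama discretization of the isotropic dynamics), not the authors' implementation; for that, see the companion repository above. All parameter values (alpha, lam, sigma, dt), the function names, and the Gaussian initialization are illustrative assumptions.

```python
import numpy as np

def cbo_minimize(f, d, n_particles=100, n_steps=1000,
                 alpha=50.0, lam=1.0, sigma=0.8, dt=0.01, seed=0):
    """Sketch of isotropic consensus-based optimization (CBO).

    f : objective mapping an (n, d) array of particle positions to an
        (n,) array of values. Parameter defaults are illustrative only.
    """
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n_particles, d))  # illustrative random initialization

    for _ in range(n_steps):
        vals = f(X)
        # Gibbs weights exp(-alpha * f); subtract the min for numerical stability
        w = np.exp(-alpha * (vals - vals.min()))
        v = (w[:, None] * X).sum(axis=0) / w.sum()  # weighted consensus point

        diff = X - v
        noise = rng.normal(size=X.shape)
        # drift toward the consensus point plus noise scaled by distance to it
        X = X - lam * dt * diff \
              + sigma * np.sqrt(dt) * np.linalg.norm(diff, axis=1, keepdims=True) * noise
    return v  # the consensus point approximates the global minimizer

# Usage: minimize the 2D Rastrigin function, whose global minimum is at the origin
rastrigin = lambda X: 10 * X.shape[1] + (X**2 - 10 * np.cos(2 * np.pi * X)).sum(axis=1)
print(cbo_minimize(rastrigin, d=2))
```

Larger alpha concentrates the weighted average on the currently best particle; this concentration effect is what the quantitative Laplace principle mentioned in the abstract makes precise.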