In this paper we study consensus-based optimization (CBO), which is a multi-agent metaheuristic derivative-free optimization method that can globally minimize nonconvex nonsmooth functions and is amenable to theoretical analysis. Based on an experimentally supported intuition that, on average, CBO performs a gradient descent of the squared Euclidean distance to the global minimizer, we devise a novel technique for proving the convergence to the global minimizer in mean-field law for a rich class of objective functions. The result unveils internal mechanisms of CBO that are responsible for the success of the method. In particular, we prove that CBO performs a convexification of a very large class of optimization problems as the number of optimizing agents goes to infinity. Furthermore, we improve prior analyses by requiring minimal assumptions about the initialization of the method and by covering objectives that are merely locally Lipschitz continuous. As a core component of this analysis, we establish a quantitative nonasymptotic Laplace principle, which may be of independent interest. From the result of CBO convergence in mean-field law, it becomes apparent that the hardness of any global optimization problem is necessarily encoded in the rate of the mean-field approximation, for which we provide a novel probabilistic quantitative estimate. The combination of these results allows us to obtain global convergence guarantees for the numerical CBO method with provable polynomial complexity.
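To make the method concrete, the following is a minimal sketch of the standard CBO dynamics described above: agents drift toward a Gibbs-weighted consensus point and diffuse proportionally to their distance from it, discretized with an Euler-Maruyama scheme. All parameter names and values here (`lam`, `sigma`, `alpha`, the step size, and the test objective) are illustrative assumptions, not the specific configuration analyzed in the paper.

```python
import numpy as np

def cbo_minimize(f, dim, n_agents=100, steps=2000, dt=0.01,
                 lam=1.0, sigma=0.7, alpha=50.0, seed=0):
    """Sketch of isotropic CBO. f maps an (n_agents, dim) array of
    agent positions to an (n_agents,) array of objective values."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-3.0, 3.0, size=(n_agents, dim))   # initial agents
    for _ in range(steps):
        fx = f(X)
        # Gibbs weights, shifted by the minimum for numerical stability
        w = np.exp(-alpha * (fx - fx.min()))
        v = (w[:, None] * X).sum(axis=0) / w.sum()      # consensus point
        diff = X - v
        noise = rng.standard_normal(X.shape)
        # Euler-Maruyama step: drift toward consensus + scaled diffusion
        X = (X - lam * diff * dt
             + sigma * np.linalg.norm(diff, axis=1, keepdims=True)
             * np.sqrt(dt) * noise)
    return v

# Illustrative nonconvex objective (Rastrigin), global minimum at the origin
def rastrigin(X):
    return 10 * X.shape[1] + (X**2 - 10 * np.cos(2 * np.pi * X)).sum(axis=1)

minimizer = cbo_minimize(rastrigin, dim=2)
```

The anisotropic variant replaces the scalar distance `np.linalg.norm(diff, axis=1, keepdims=True)` with the componentwise `np.abs(diff)`; the mean-field law studied in the paper is the limit of such particle systems as `n_agents` tends to infinity.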