In this paper we study consensus-based optimization (CBO), which is a metaheuristic derivative-free optimization method that can globally minimize nonconvex nonsmooth functions and is amenable to theoretical analysis. Based on an experimentally supported intuition that CBO performs a gradient descent on the convex envelope of a given objective, we derive a novel technique for proving the convergence to the global minimizer in mean-field law for a rich class of objective functions. Our results unveil internal mechanisms of CBO that are responsible for the success of the method. Furthermore, we improve prior analyses by requiring minimal assumptions about the initialization of the method and by covering objectives that are merely locally Lipschitz continuous. As a by-product of our analysis, we establish a quantitative nonasymptotic Laplace principle, which may be of independent interest.
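To make the method concrete, here is a minimal sketch of standard CBO particle dynamics as commonly presented in the literature (not the paper's exact scheme or parameters): each particle drifts toward a Gibbs-weighted consensus point and diffuses with noise scaled by its distance from that point. All names and parameter values below are illustrative assumptions.

```python
import numpy as np

def cbo_minimize(f, n_particles=100, dim=2, steps=1000,
                 dt=0.01, lam=1.0, sigma=1.0, alpha=50.0, seed=0):
    """Illustrative CBO sketch (Euler-Maruyama discretization).

    Particles X_i follow
        dX_i = -lam * (X_i - v_alpha) dt + sigma * (X_i - v_alpha) dB_i,
    where v_alpha is the consensus point, a softmax average of the
    particles weighted by exp(-alpha * f(X_i)).
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(-3.0, 3.0, size=(n_particles, dim))
    for _ in range(steps):
        vals = f(X)
        # Gibbs weights, shifted by the minimum for numerical stability
        w = np.exp(-alpha * (vals - vals.min()))
        v = (w[:, None] * X).sum(axis=0) / w.sum()  # consensus point
        diff = X - v
        noise = rng.standard_normal(X.shape)
        # drift toward consensus + componentwise (anisotropic) diffusion
        X = X - lam * diff * dt + sigma * diff * noise * np.sqrt(dt)
    return v

# A simple nonconvex, nonsmooth-friendly test objective with
# global minimizer at the origin and shallow local minima.
f = lambda X: (X**2).sum(axis=1) + (1 - np.cos(2 * np.pi * X)).sum(axis=1)
minimizer = cbo_minimize(f)
```

As the paper's intuition suggests, the oscillatory wells of this objective are smoothed out by the interplay of the weighted consensus and the diffusion, and the particles collapse near the global minimizer rather than a local one.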