In this paper we study consensus-based optimization (CBO), a versatile, flexible, and customizable optimization method suitable for performing nonconvex and nonsmooth global optimization in high dimensions. CBO is a multi-particle metaheuristic, which is effective in various applications and at the same time amenable to theoretical analysis thanks to its minimalistic design. The underlying dynamics, however, is flexible enough to incorporate different mechanisms widely used in evolutionary computation and machine learning, as we show by analyzing a variant of CBO which makes use of memory effects and gradient information. We rigorously prove that this dynamics converges to a global minimizer of the objective function in mean-field law for a vast class of functions under minimal assumptions on the initialization of the method. The proof in particular reveals how to leverage additional forces in the dynamics, advantageous in certain applications, without losing provable global convergence. To demonstrate the benefit of the memory effects and gradient information investigated herein, we present numerical evidence for the superiority of this CBO variant in applications such as machine learning and compressed sensing, which en passant widens the scope of applications of CBO.
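To make the baseline dynamics concrete, the plain CBO iteration (without the memory and gradient terms studied in this work) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: parameter names follow common CBO notation (weight parameter alpha, drift lambda, diffusion sigma), and all values are assumptions chosen for illustration.

```python
import numpy as np

def cbo_minimize(f, d=2, n_particles=100, alpha=50.0, lam=1.0,
                 sigma=0.7, dt=0.01, steps=2000, seed=0):
    """Minimal sketch of plain (isotropic) CBO.

    f : callable mapping an (n, d) array of particle positions to an
        (n,) array of objective values.
    Returns the final consensus point, an estimate of the global minimizer.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(-3.0, 3.0, size=(n_particles, d))  # initial ensemble
    for _ in range(steps):
        fx = f(X)
        # Consensus point: softmin-weighted average of the particles
        # (values are shifted by the minimum for numerical stability).
        w = np.exp(-alpha * (fx - fx.min()))
        v = (w[:, None] * X).sum(axis=0) / w.sum()
        diff = X - v
        # Euler-Maruyama step: drift toward the consensus point plus
        # isotropic noise scaled by the distance to consensus.
        noise = rng.standard_normal(X.shape)
        X = (X - lam * diff * dt
             + sigma * np.linalg.norm(diff, axis=1, keepdims=True)
               * np.sqrt(dt) * noise)
    return v
```

As the particles reach consensus, the noise term vanishes with the distance to the consensus point, so the ensemble collapses onto a candidate minimizer; the memory and gradient mechanisms analyzed in the paper enter as additional drift terms in this update.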