In this work we survey some recent results on the global minimization of a non-convex, possibly non-smooth, high-dimensional objective function by means of particle-based gradient-free methods. Such problems arise in many situations of contemporary interest in machine learning and signal processing. After a brief overview of metaheuristic methods based on particle swarm optimization (PSO), we introduce a continuous formulation, based on second-order systems of stochastic differential equations, that generalizes PSO methods and provides the basis for their theoretical analysis. Subsequently, we show how mean-field techniques allow one to derive, in the limit of a large number of particles, the corresponding mean-field PSO description based on Vlasov-Fokker-Planck-type equations. Finally, in the zero-inertia limit, we analyze the corresponding macroscopic hydrodynamic equations, showing that they generalize the recently introduced consensus-based optimization (CBO) methods by including memory effects. Rigorous results concerning the mean-field limit, the zero-inertia limit, and the convergence of the mean-field PSO method towards the global minimum are provided, together with a suite of numerical examples.
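To fix ideas, a schematic version of such a second-order system is sketched below; the coefficients, the noise structure, and the form of the memory variable are chosen here purely for illustration and may differ from the precise formulation analyzed in the work. For particles with positions $X^i_t$, velocities $V^i_t$, and memory (personal best) variables $P^i_t$, $i=1,\dots,N$, one may write
\[
\begin{aligned}
dX^i_t &= V^i_t\,dt,\\
m\,dV^i_t &= -\gamma V^i_t\,dt + \lambda_1\bigl(P^i_t - X^i_t\bigr)\,dt + \lambda_2\bigl(\bar P^{\alpha}_t - X^i_t\bigr)\,dt\\
&\quad + \sigma_1\bigl|P^i_t - X^i_t\bigr|\,dB^{i,1}_t + \sigma_2\bigl|\bar P^{\alpha}_t - X^i_t\bigr|\,dB^{i,2}_t,
\end{aligned}
\qquad
\bar P^{\alpha}_t = \frac{\sum_{j=1}^N P^j_t\, e^{-\alpha \mathcal{F}(P^j_t)}}{\sum_{j=1}^N e^{-\alpha \mathcal{F}(P^j_t)}},
\]
where $m>0$ is the inertia, $\gamma\ge 0$ a friction coefficient, $\mathcal{F}$ the objective function, $B^{i,1}_t,B^{i,2}_t$ independent Brownian motions, $P^i_t$ an auxiliary variable recording (a regularized version of) the personal best position visited by particle $i$, and $\bar P^{\alpha}_t$ a Gibbs-type weighted average which, for large $\alpha$, concentrates around the currently best-known position.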
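For comparison, the standard (memoryless) consensus-based optimization dynamics that the above hydrodynamic limit generalizes can be written, in its isotropic form, as
\[
dX^i_t = -\lambda\bigl(X^i_t - \bar X^{\alpha}_t\bigr)\,dt + \sigma\bigl|X^i_t - \bar X^{\alpha}_t\bigr|\,dB^i_t,
\qquad
\bar X^{\alpha}_t = \frac{\sum_{j=1}^N X^j_t\, e^{-\alpha \mathcal{F}(X^j_t)}}{\sum_{j=1}^N e^{-\alpha \mathcal{F}(X^j_t)}},
\]
which is formally recovered from the second-order sketch above in the small-inertia regime when the memory variables are suppressed. This is only an illustrative reduction; the precise zero-inertia limit, retaining memory effects, is the object of the analysis surveyed in the remainder of the work.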