We propose the particle dual averaging (PDA) method, which generalizes the dual averaging method in convex optimization to optimization over probability distributions with quantitative runtime guarantees. The algorithm consists of an inner loop and an outer loop: the inner loop utilizes the Langevin algorithm to approximately solve for a stationary distribution, which is then optimized in the outer loop. The method can thus be interpreted as an extension of the Langevin algorithm that naturally handles nonlinear functionals on the probability space. An important application of the proposed method is the optimization of neural networks in the mean-field regime, which is theoretically attractive due to the presence of nonlinear feature learning, but for which quantitative convergence rates can be challenging to obtain. By adapting finite-dimensional convex optimization theory to the space of measures, we analyze PDA for regularized empirical/expected risk minimization, and establish quantitative global convergence in learning two-layer mean-field neural networks under more general settings. Our theoretical results are supported by numerical simulations on neural networks of reasonable size.
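As a concrete illustration of the two-loop structure described above, the following is a minimal NumPy sketch for a two-layer mean-field network f_mu(x) = E_{w~mu}[tanh(w . x)] on a toy squared-loss regression task. The particular averaging weights, step sizes, and regularization constants (lam, r, eta, T, K) are illustrative placeholders, not the schedule or constants from the paper's analysis.

```python
# Hedged sketch of particle dual averaging (PDA): the outer loop
# accumulates a weighted average of residuals (the dual average, which
# defines a linear potential on the particle space), and the inner loop
# runs Langevin dynamics to approximately sample the corresponding
# Gibbs stationary distribution. All constants below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 128, 5, 300                        # samples, input dim, particles
X = rng.normal(size=(n, d))
y = np.tanh(X @ rng.normal(size=d))          # toy regression targets
W = rng.normal(size=(m, d))                  # particle positions (neurons)
lam, r, eta, T, K = 0.05, 0.01, 0.05, 15, 200
R = np.zeros(n)                              # dual average of residuals

for t in range(1, T + 1):
    # Outer loop: update the dual average with the current residuals
    # f_mu(x_j) - y_j, weighting outer step t by t (a common dual
    # averaging weighting; assumed here, not taken from the paper).
    resid = np.tanh(X @ W.T).mean(axis=1) - y
    R += t * resid
    c = 2.0 / (t * (t + 1))                  # normalizes the weights

    # Inner loop: Langevin steps targeting the Gibbs distribution
    # proportional to exp(-((c/n) <R, tanh(W x)> + (r/2)|w|^2) / lam).
    for _ in range(K):
        act = np.tanh(W @ X.T)               # (m, n) activations
        gradV = c * ((1 - act ** 2) * R) @ X / n + r * W
        W += -eta * gradV + np.sqrt(2 * eta * lam) * rng.normal(size=W.shape)

print("final squared loss:",
      0.5 * np.mean((np.tanh(X @ W.T).mean(axis=1) - y) ** 2))
```

The entropic regularization enters through the Langevin noise scale sqrt(2 * eta * lam): larger lam targets a more diffuse stationary distribution, mirroring the regularized risk minimization setting analyzed in the paper.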