In this paper, we investigate the performance of two first-order optimization algorithms obtained from the forward Euler discretization of finite-time optimization flows. These flows, the rescaled-gradient flow (RGF) and the signed-gradient flow (SGF), are non-Lipschitz or discontinuous dynamical systems that converge locally in finite time to the minima of gradient-dominated functions. We propose an Euler discretization for these first-order finite-time flows and provide convergence guarantees in both the deterministic and the stochastic setting. We then apply the proposed algorithms to academic examples as well as deep neural network training, empirically testing their performance on the SVHN dataset. Our results show that our schemes converge faster than standard optimization alternatives.
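For concreteness, the sketch below shows what the two Euler-discretized updates could look like. It assumes the forms of these flows commonly used in the finite-time-flow literature, x' = -∇f(x)/||∇f(x)||^((p-2)/(p-1)) for RGF and x' = -sign(∇f(x)) for SGF; the abstract does not fix these definitions, and the exponent p, the step size, the helper names, and the small `eps` safeguard are all illustrative choices rather than the paper's own.

```python
import numpy as np

def rgf_step(x, grad_f, eta=0.1, p=4, eps=1e-12):
    """One forward-Euler step of a rescaled-gradient flow (RGF).

    Assumes the flow x' = -grad f(x) / ||grad f(x)||^((p-2)/(p-1));
    eps guards against division by zero at a stationary point.
    """
    g = grad_f(x)
    scale = np.linalg.norm(g) ** ((p - 2) / (p - 1)) + eps
    return x - eta * g / scale

def sgf_step(x, grad_f, eta=0.1):
    """One forward-Euler step of a signed-gradient flow (SGF),
    assuming the flow x' = -sign(grad f(x))."""
    return x - eta * np.sign(grad_f(x))

# Toy usage on the gradient-dominated quadratic f(x) = 0.5 ||x||^2,
# whose gradient is simply x.
grad = lambda x: x
x = np.array([2.0, -1.0])
for _ in range(50):
    x = rgf_step(x, grad)
print(x)  # settles in a small neighborhood of the minimizer at the
          # origin; the fixed step size prevents exact convergence
```

Note that the RGF rescaling amplifies the step as the gradient vanishes, which is what yields finite-time convergence in continuous time; in discrete time this makes the behavior near the minimizer sensitive to the step size, as the toy example illustrates.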