The training of deep neural networks is typically carried out using some form of gradient descent, often with great success. However, existing non-asymptotic analyses of first-order optimization algorithms typically employ a gradient smoothness assumption that is too strong to hold for deep neural networks. To address this, we propose an algorithm named duality structure gradient descent (DSGD) that is amenable to non-asymptotic performance analysis under mild assumptions on the training set and network architecture. The algorithm can be viewed as a form of layer-wise coordinate descent, where at each iteration the algorithm chooses one layer of the network to update. The choice of which layer to update is made in a greedy fashion, based on a rigorous lower bound on the improvement of the objective function for each choice of layer. In the analysis, we bound the time required to reach approximate stationary points, in both the deterministic and stochastic settings. The convergence is measured in terms of a parameter-dependent family of norms that is derived from the network architecture and designed so that the gradient of the training loss function satisfies a smoothness-like property with respect to these norms. We empirically demonstrate the effectiveness of DSGD in several neural network training scenarios.
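The following is a minimal, hypothetical sketch of the greedy layer-wise update pattern described above, not the paper's exact algorithm or bound: it scores each layer with a generic quadratic-model estimate of the guaranteed decrease, `||g_l||^2 / (2 L_l)`, using assumed per-layer constants `L_consts`, and updates only the layer with the largest estimate. All names, the toy two-layer network, and the step sizes are illustrative assumptions.

```python
# Hypothetical sketch of a DSGD-style greedy layer-wise step (illustrative only;
# the per-layer improvement estimates and constants below are assumptions, not
# the paper's duality-structure bounds).
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network with squared loss on synthetic data.
X = rng.normal(size=(64, 10))
y = rng.normal(size=(64, 1))
W1 = rng.normal(scale=0.1, size=(10, 16))
W2 = rng.normal(scale=0.1, size=(16, 1))
L_consts = [1.0, 1.0]  # assumed per-layer smoothness constants (hypothetical)

def loss_and_grads(W1, W2, X, y):
    H = np.tanh(X @ W1)
    pred = H @ W2
    err = pred - y
    loss = 0.5 * np.mean(err ** 2)
    dpred = err / len(X)
    g2 = H.T @ dpred                      # gradient w.r.t. W2
    dH = (dpred @ W2.T) * (1.0 - H ** 2)  # backprop through tanh
    g1 = X.T @ dH                         # gradient w.r.t. W1
    return loss, [g1, g2]

params = [W1, W2]
for step in range(200):
    loss, grads = loss_and_grads(params[0], params[1], X, y)
    # Greedy choice: estimate the guaranteed decrease from updating each layer alone.
    scores = [np.sum(g ** 2) / (2.0 * L) for g, L in zip(grads, L_consts)]
    k = int(np.argmax(scores))            # layer with the largest estimated decrease
    params[k] = params[k] - (1.0 / L_consts[k]) * grads[k]  # update only that layer
```

In the sketch, only one layer moves per iteration, mirroring the layer-wise coordinate-descent view of DSGD; the paper's actual layer scores are derived from the architecture-dependent family of norms rather than the fixed constants assumed here.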