This paper proposes an algorithm, Regularized Modernized Dual Averaging (RMDA), for training neural networks (NNs) with a regularization term that promotes desired structures. RMDA incurs no computation beyond that of proximal SGD with momentum, and achieves variance reduction without requiring the objective function to be of the finite-sum form. Through the tool of manifold identification from nonlinear optimization, we prove that after a finite number of iterations, all iterates of RMDA possess a desired structure identical to that induced by the regularizer at the stationary point to which the iterates converge, even in the presence of engineering tricks such as data augmentation and dropout that complicate the training process. Experiments on training NNs with structured sparsity confirm that variance reduction is necessary for such identification, and show that RMDA therefore significantly outperforms existing methods for this task. For unstructured sparsity, RMDA also outperforms a state-of-the-art pruning method, validating the benefits of training structured NNs through regularization.
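To make the setting concrete, the following is a minimal sketch of the proximal SGD with momentum baseline referenced above, paired with a group-lasso regularizer whose proximal operator (block soft-thresholding) zeroes out entire parameter groups and thus induces structured sparsity. This is not RMDA itself, which additionally uses a dual-averaging update to obtain variance reduction; the function names, the heavy-ball form of the momentum, and the grouping scheme are illustrative assumptions.

```python
import numpy as np

def prox_group_lasso(w, groups, tau):
    """Proximal operator of tau * sum_g ||w_g||_2 (block soft-thresholding).
    Groups whose norm falls below tau are set exactly to zero, which is
    the mechanism producing structured sparsity."""
    out = w.copy()
    for g in groups:
        norm = np.linalg.norm(w[g])
        scale = max(0.0, 1.0 - tau / norm) if norm > 0 else 0.0
        out[g] = scale * w[g]
    return out

def proximal_sgd_momentum_step(w, grad, velocity, lr, beta, lam, groups):
    """One step of proximal SGD with heavy-ball momentum (illustrative):
    momentum update on the stochastic gradient, gradient step, then the
    proximal step of the scaled regularizer lr * lam * sum_g ||w_g||_2."""
    velocity = beta * velocity + grad
    w = prox_group_lasso(w - lr * velocity, groups, lr * lam)
    return w, velocity

# Toy usage: 6 parameters split into 3 groups of 2.
rng = np.random.default_rng(0)
w = rng.standard_normal(6)
velocity = np.zeros_like(w)
groups = [slice(0, 2), slice(2, 4), slice(4, 6)]
for _ in range(100):
    grad = w - 1.0  # stand-in stochastic gradient of a quadratic loss
    w, velocity = proximal_sgd_momentum_step(
        w, grad, velocity, lr=0.1, beta=0.9, lam=0.5, groups=groups)
print(w)  # some groups may be exactly zero after training
```

The point of the variance-reduction claim in the abstract is that plain stochastic gradients, as in this sketch, carry noise that perturbs the proximal step and prevents the iterates from settling on the zero pattern of the limit point; RMDA's averaged updates remove that obstacle so the structure is identified in finitely many iterations.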