Gradient regularization (GR) is a method that penalizes the gradient norm of the training loss during training. While some studies have reported that GR can improve generalization performance, little attention has been paid to it from an algorithmic perspective, that is, to algorithms of GR that efficiently improve performance. In this study, we first reveal that a specific finite-difference computation, composed of both gradient ascent and descent steps, reduces the computational cost of GR. Next, we show that the finite-difference computation also works better in terms of generalization performance. We theoretically analyze a solvable model, a diagonal linear network, and clarify that GR has a desirable implicit bias toward the so-called rich regime and that the finite-difference computation strengthens this bias. Furthermore, finite-difference GR is closely related to other algorithms based on iterative ascent and descent steps for exploring flat minima. In particular, we reveal that the flooding method can perform finite-difference GR in an implicit way. Thus, this work broadens our understanding of GR for both practice and theory.
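The sketch below illustrates the kind of finite-difference GR update the abstract describes: the gradient of the penalty term is approximated by evaluating the gradient once more at a point reached by a small gradient-ascent step, so no explicit Hessian-vector product is needed. It is a minimal illustration on a toy quadratic loss; the loss, step sizes, and penalty coefficient are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

# Toy ill-conditioned quadratic loss L(w) = 0.5 * w^T A w (assumed example).
A = np.diag([1.0, 10.0])
loss = lambda w: 0.5 * w @ A @ w
grad = lambda w: A @ w

gamma = 0.05   # GR penalty coefficient in L(w) + gamma/2 * ||grad L(w)||^2
eps = 0.01     # finite-difference step size for the ascent step
lr = 0.1       # learning rate

w = np.array([1.0, 1.0])
for _ in range(100):
    g = grad(w)
    # Ascent step: evaluate the gradient at the perturbed point w + eps * g.
    g_ascent = grad(w + eps * g)
    # Finite-difference estimate of grad(||grad L||^2 / 2) = H g:
    #   (grad L(w + eps * g) - grad L(w)) / eps
    g_total = g + gamma * (g_ascent - g) / eps
    # Descent step on the GR-regularized objective.
    w = w - lr * g_total

print(loss(w))  # approaches 0 as w converges toward the minimum at the origin
```

Compared with differentiating the gradient-norm penalty exactly, this ascent-then-descent form needs only two gradient evaluations per update, which is where the computational saving mentioned above comes from.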