Driven by the empirical success and wide use of deep neural networks, understanding the generalization performance of overparameterized models has become an increasingly popular question. To this end, there has been substantial effort to characterize the implicit bias of the optimization algorithms used, such as gradient descent (GD), and the structural properties of their preferred solutions. This paper answers an open question in this literature: For the classification setting, what solution does mirror descent (MD) converge to? Specifically, motivated by its efficient implementation, we consider the family of mirror descent algorithms with potential function chosen as the $p$-th power of the $\ell_p$-norm, which is an important generalization of GD. We call this algorithm $p$-$\textsf{GD}$. For this family, we characterize the solutions it obtains and show that it converges in direction to a generalized maximum-margin solution with respect to the $\ell_p$-norm for linearly separable classification. While the MD update rule is in general expensive to compute and perhaps not suitable for deep learning, $p$-$\textsf{GD}$ is fully parallelizable in the same manner as SGD and can be used to train deep neural networks with virtually no additional computational overhead. Using comprehensive experiments with both linear and deep neural network models, we demonstrate that $p$-$\textsf{GD}$ can noticeably affect the structure and the generalization performance of the learned models.
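To make the update rule concrete, the following is a minimal sketch of one $p$-$\textsf{GD}$ step; the step size $\eta$ and training loss $L$ are illustrative notation not fixed by the text above. With potential $\psi(w) = \|w\|_p^p$ and $p > 1$, the mirror map $\nabla\psi(w)_i = p\,|w_i|^{p-1}\operatorname{sign}(w_i)$ is separable across coordinates, so the mirror descent step
\[
\nabla\psi(w_{t+1}) = \nabla\psi(w_t) - \eta\,\nabla L(w_t)
\quad\Longleftrightarrow\quad
w_{t+1,i} = \operatorname{sign}(z_{t,i})\left(\frac{|z_{t,i}|}{p}\right)^{\frac{1}{p-1}},
\qquad
z_{t,i} = p\,|w_{t,i}|^{p-1}\operatorname{sign}(w_{t,i}) - \eta\,[\nabla L(w_t)]_i,
\]
can be evaluated elementwise and in parallel, which is why $p$-$\textsf{GD}$ can be run in the same manner as SGD with virtually no additional overhead; setting $p = 2$ recovers the usual GD update up to a rescaling.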