We examine gradient descent on unregularized logistic regression problems, with homogeneous linear predictors on linearly separable datasets. We show the predictor converges to the direction of the max-margin (hard margin SVM) solution. The result also generalizes to other monotone decreasing loss functions with an infimum at infinity, to multi-class problems, and to training a weight layer in a deep network in a certain restricted setting. Furthermore, we show this convergence is very slow, and only logarithmic in the convergence of the loss itself. This can help explain the benefit of continuing to optimize the logistic or cross-entropy loss even after the training error is zero and the training loss is extremely small, and, as we show, even if the validation loss increases. Our methodology can also aid in understanding implicit regularization in more complex models and with other optimization methods.
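As a hedged illustration of the main claim (not an experiment from the paper itself), the following minimal Python sketch runs gradient descent on the unregularized logistic loss over a synthetic separable 2D dataset and tracks how the normalized weight vector drifts toward the hard-margin SVM direction; the norm of the iterate keeps growing while the angle to the max-margin direction shrinks only slowly. The dataset, step size, iteration counts, and the use of a large-C linear SVM as a stand-in for the exact max-margin solution are all illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC  # used only to obtain an (approximate) hard-margin reference direction

rng = np.random.default_rng(0)
# Hypothetical toy dataset: two well-separated Gaussian blobs in 2D,
# with labels folded into the inputs so the predictor is homogeneous (no bias term).
X = np.vstack([rng.normal([2, 2], 0.3, (50, 2)), rng.normal([-2, -2], 0.3, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])
Z = X * y[:, None]  # z_n = y_n * x_n; separability means w @ z_n > 0 for all n

# Reference: approximately hard-margin SVM direction via a linear SVM with very large C.
svm = SVC(kernel="linear", C=1e6).fit(X, y)
w_svm = svm.coef_.ravel()
w_svm /= np.linalg.norm(w_svm)

# Gradient descent on the unregularized logistic loss L(w) = mean_n log(1 + exp(-w @ z_n)).
w, eta = np.zeros(2), 0.1
for t in range(1, 200001):
    p = 1.0 / (1.0 + np.exp(Z @ w))   # sigmoid(-w @ z_n): per-sample gradient weights
    w += eta * Z.T @ p / len(Z)       # descent step on the averaged loss
    if t in (10, 100, 1000, 10000, 100000, 200000):
        w_dir = w / np.linalg.norm(w)
        angle = np.degrees(np.arccos(np.clip(w_dir @ w_svm, -1.0, 1.0)))
        print(f"t={t:>6}  ||w||={np.linalg.norm(w):6.2f}  angle to SVM direction={angle:.3f} deg")
```

In this sketch the loss quickly becomes tiny and the training error is zero, yet ||w|| keeps growing (roughly like log t) and the angle to the max-margin direction decreases very slowly, consistent with the logarithmically slow directional convergence described in the abstract.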