The phenomenon of benign overfitting, where a predictor perfectly fits noisy training data while attaining low expected loss, has received much attention in recent years, but remains not fully understood beyond simple linear regression setups. In this paper, we show that for regression, benign overfitting is "biased" towards certain types of problems, in the sense that its existence on one learning problem excludes its existence on other learning problems. On the negative side, we use this to argue that one should not expect benign overfitting to occur in general, for several natural extensions of the plain linear regression problems studied so far. We then turn to classification problems, and show that the situation there is much more favorable. Specifically, we consider a model where an arbitrary input distribution of some fixed dimension $k$ is concatenated with a high-dimensional distribution, and prove that the max-margin predictor (to which gradient-based methods are known to converge in direction) is asymptotically biased towards minimizing the expected *squared hinge loss* w.r.t. the $k$-dimensional distribution. This allows us to reduce the question of benign overfitting in classification to the simpler question of whether this loss is a good surrogate for the prediction error, and use it to show benign overfitting in some new settings.
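The concatenation model described above can be illustrated numerically. The following is a minimal toy sketch (not the paper's construction, and all sizes and scalings are illustrative assumptions): a $k$-dimensional signal is concatenated with many weak noise features, and the minimum-norm linear interpolant is used as a stand-in for an interpolating predictor. It fits every noisy training label exactly, yet fresh test points are classified essentially by the signal coordinates alone.

```python
import numpy as np

# Toy sketch of benign overfitting under feature concatenation (illustrative
# parameters, not the paper's setting): k informative coordinates plus d >> n
# weak noise coordinates.
rng = np.random.default_rng(0)
n, k, d = 100, 2, 5000

X_sig = rng.standard_normal((n, k))                  # k-dimensional informative part
X_noise = rng.standard_normal((n, d)) / np.sqrt(d)   # high-dim part, O(1) total energy
X = np.hstack([X_sig, X_noise])

y = np.sign(X_sig[:, 0])                             # clean labels from first coordinate
flip = rng.choice(n, size=n // 10, replace=False)
y[flip] *= -1                                        # inject 10% label noise

# Minimum-norm interpolant: with d >> n the rows of X are linearly independent
# almost surely, so X @ w = y exactly -- the predictor fits every noisy label.
w = np.linalg.pinv(X) @ y
train_acc = np.mean(np.sign(X @ w) == y)             # perfect (over)fit of noisy labels

# Fresh test points: their noise features are independent of w's noise block,
# so the prediction is driven by the k signal coordinates.
m = 2000
T_sig = rng.standard_normal((m, k))
T = np.hstack([T_sig, rng.standard_normal((m, d)) / np.sqrt(d)])
test_acc = np.mean(np.sign(T @ w) == np.sign(T_sig[:, 0]))

print(train_acc, test_acc)
```

Despite interpolating 10% flipped labels, the test accuracy against the clean labels stays high, which is the qualitative behavior the abstract refers to as benign overfitting.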