Linear classifiers and leaky ReLU networks trained by gradient flow on the logistic loss have an implicit bias towards solutions which satisfy the Karush--Kuhn--Tucker (KKT) conditions for margin maximization. In this work we establish a number of settings where the satisfaction of these KKT conditions implies benign overfitting in linear classifiers and in two-layer leaky ReLU networks: the estimators interpolate noisy training data and simultaneously generalize well to test data. The settings include variants of the noisy class-conditional Gaussians considered in previous work as well as new distributional settings where benign overfitting has not been previously observed. The key ingredient in our proof is the observation that when the training data is nearly orthogonal, both linear classifiers and leaky ReLU networks satisfying the KKT conditions for their respective margin maximization problems behave like a nearly uniform average of the training examples.
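To make the margin-maximization problem concrete, the following is the standard formulation for the linear case together with its KKT conditions; this is an illustrative sketch of the mechanism summarized above, not an excerpt from the paper's analysis.
\[
\min_{w \in \mathbb{R}^d} \ \tfrac{1}{2}\|w\|_2^2 \quad \text{subject to} \quad y_i \langle w, x_i\rangle \ge 1, \qquad i = 1, \dots, n.
\]
The KKT conditions (stationarity, primal and dual feasibility, complementary slackness) read
\[
w = \sum_{i=1}^n \lambda_i y_i x_i, \qquad \lambda_i \ge 0, \qquad y_i \langle w, x_i\rangle \ge 1, \qquad \lambda_i\bigl(y_i \langle w, x_i\rangle - 1\bigr) = 0.
\]
When the examples $x_1, \dots, x_n$ are nearly orthogonal, substituting the stationarity condition into the active margin constraints forces the multipliers $\lambda_i$ to be nearly equal, so that $w \approx c \sum_{i=1}^n y_i x_i$ for some $c > 0$, i.e., a nearly uniform (label-weighted) average of the training examples.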
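The sketch below is a minimal numerical illustration (not code from the paper) of the benign-overfitting phenomenon in the linear, noisy class-conditional Gaussian setting: with $n \ll d$ the samples are nearly orthogonal, gradient descent on the logistic loss interpolates the noisy training labels, generalizes on clean test data, and stays closely aligned with the uniform average $\sum_i y_i x_i$. All parameter values (dimension, sample size, cluster mean, noise rate) are assumptions chosen for illustration.

```python
import numpy as np

# Illustrative setup (assumed values): high-dimensional class-conditional
# Gaussians x = y_clean * mu + z with z ~ N(0, I_d), and 10% flipped labels.
rng = np.random.default_rng(0)
n, d, noise_rate = 30, 10_000, 0.1
mu = np.zeros(d)
mu[0] = 9.0                                   # cluster mean with ||mu||^2 = 81

y_clean = rng.choice([-1.0, 1.0], size=n)
X = y_clean[:, None] * mu + rng.normal(size=(n, d))
flip = rng.random(n) < noise_rate             # corrupt a fraction of training labels
y = np.where(flip, -y_clean, y_clean)

# Gradient descent on the logistic loss for a linear classifier (no bias term).
w = np.zeros(d)
lr = 0.1
for _ in range(5_000):
    margins = y * (X @ w)
    sig = 1.0 / (1.0 + np.exp(np.clip(margins, -30.0, 30.0)))   # clipped for stability
    w -= lr * -(X * (y * sig)[:, None]).mean(axis=0)

# The classifier fits the *noisy* training labels ...
train_acc_noisy = np.mean(np.sign(X @ w) == y)

# ... and is evaluated on fresh clean test data from the same clusters.
y_test = rng.choice([-1.0, 1.0], size=2_000)
X_test = y_test[:, None] * mu + rng.normal(size=(2_000, d))
test_acc = np.mean(np.sign(X_test @ w) == y_test)

# Alignment with the nearly uniform average of the training examples.
w_avg = (y[:, None] * X).mean(axis=0)
cosine = w @ w_avg / (np.linalg.norm(w) * np.linalg.norm(w_avg))

print(f"train accuracy on noisy labels: {train_acc_noisy:.2f}")
print(f"test accuracy on clean labels:  {test_acc:.2f}")
print(f"cosine(w_GD, sum_i y_i x_i):    {cosine:.2f}")
```

The regime chosen here ($\sqrt{d/n} \ll \|\mu\|^2 \ll d/n$ and $d \gg n^2$) is intended to mimic, at a heuristic level, the near-orthogonality conditions under which the uniform-average behaviour described in the abstract emerges; it is not a statement of the paper's precise assumptions.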