In this work, we provide a characterization of the feature-learning process in two-layer ReLU networks trained by gradient descent on the logistic loss following random initialization. We consider data with binary labels that are generated by an XOR-like function of the input features, and we permit a constant fraction of the training labels to be corrupted by an adversary. We show that, although linear classifiers are no better than random guessing for the distribution we consider, two-layer ReLU networks trained by gradient descent achieve generalization error close to the label noise rate. We develop a novel proof technique showing that at initialization, the vast majority of neurons function as random features that are only weakly correlated with useful features, and that the gradient descent dynamics 'amplify' these weak, random features into strong, useful features.
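To make the setting concrete, the following is a minimal, self-contained sketch of the setup the abstract describes: XOR-like clustered data with a constant fraction of flipped labels, and a randomly initialized two-layer ReLU network trained by full-batch gradient descent on the logistic loss. The cluster means, the within-cluster noise level, the corruption rate `eta`, and all hyperparameters are illustrative assumptions, not the paper's exact construction.

```python
# A minimal sketch (assumed setup, not the paper's construction): XOR-like
# data with label noise, and a two-layer ReLU network trained by full-batch
# gradient descent on the logistic loss.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, d=2, noise=0.1, eta=0.15):
    """Four Gaussian clusters; the clean label is the XOR (product of signs)
    of the coordinates, so no linear classifier beats random guessing.
    A fraction eta of labels is flipped to model label corruption."""
    means = np.array([[1, 1], [-1, -1], [1, -1], [-1, 1]], dtype=float)
    idx = rng.integers(0, 4, size=n)
    X = means[idx] + noise * rng.standard_normal((n, d))
    y = np.where(idx < 2, 1.0, -1.0)   # clean XOR labels
    flip = rng.random(n) < eta         # corrupted labels
    y[flip] *= -1
    return X, y

def forward(W, a, X):
    """Two-layer ReLU network f(x) = a^T relu(W x)."""
    H = np.maximum(X @ W.T, 0.0)       # hidden activations, shape (n, m)
    return H @ a, H

n, d, m, lr, steps, eta = 400, 2, 200, 0.5, 2000, 0.15
X, y = make_data(n, d=d, eta=eta)
W = rng.standard_normal((m, d)) / np.sqrt(d)   # random features at init
a = rng.standard_normal(m) / np.sqrt(m)

for t in range(steps):
    out, H = forward(W, a, X)
    # logistic loss (1/n) * sum log(1 + exp(-y f(x))); its gradient w.r.t.
    # the network output is -y / (1 + exp(y f(x))) / n (clipped for stability)
    g = -y / (1.0 + np.exp(np.clip(y * out, -30, 30))) / n
    a -= lr * (H.T @ g)
    W -= lr * (((g[:, None] * a) * (H > 0)).T @ X)

# Test error against noisy labels approaches the label noise rate eta
# when the network recovers the clean XOR function.
Xte, yte = make_data(2000, d=d, eta=eta)
out_te, _ = forward(W, a, Xte)
print("test error:", np.mean(np.sign(out_te) != yte))
```

Because the four cluster means are symmetric about the origin and the label is a product of coordinate signs, every linear classifier has accuracy 1/2 on the clean labels, which is why the sketch matches the abstract's comparison against linear predictors.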