The notion of neural collapse refers to several emergent phenomena that have been empirically observed across various canonical classification problems. During the terminal phase of training a deep neural network, the feature embeddings of all examples of the same class tend to collapse to a single representation, and the features of different classes tend to separate as much as possible. Neural collapse is often studied through a simplified model, called the unconstrained feature representation, in which the model is assumed to have "infinite expressivity" and can map each data point to any arbitrary representation. In this work, we propose a more realistic variant of the unconstrained feature representation that takes the limited expressivity of the network into account. Empirical evidence suggests that the memorization of noisy data points leads to a degradation (dilation) of the neural collapse. Using a model of the memorization-dilation (M-D) phenomenon, we show one mechanism by which different loss functions lead to different performance of the trained network on noisy data. Our proofs reveal why label smoothing, a modification of cross-entropy empirically observed to produce a regularization effect, leads to improved generalization in classification tasks.
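The within-class collapse and between-class separation described above are commonly quantified by comparing within-class to between-class feature scatter. The sketch below illustrates this idea on synthetic data; the metric name, the synthetic clusters, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np


def nc1_metric(features, labels):
    """Ratio of within-class to between-class scatter (traces of the
    scatter matrices); smaller values indicate stronger collapse."""
    global_mean = features.mean(axis=0)
    sw = 0.0  # within-class scatter: spread of samples around class means
    sb = 0.0  # between-class scatter: spread of class means around global mean
    for c in np.unique(labels):
        fc = features[labels == c]
        mu_c = fc.mean(axis=0)
        sw += ((fc - mu_c) ** 2).sum()
        sb += len(fc) * ((mu_c - global_mean) ** 2).sum()
    return sw / sb


rng = np.random.default_rng(0)
means = np.array([[5.0, 0.0], [0.0, 5.0], [-5.0, -5.0]])
labels = np.repeat(np.arange(3), 100)

# "Collapsed" features: tight clusters around well-separated class means.
collapsed = means[labels] + 0.01 * rng.standard_normal((300, 2))
# "Dilated" features: same class means, much larger within-class spread.
dilated = means[labels] + 2.0 * rng.standard_normal((300, 2))

print(nc1_metric(collapsed, labels))  # near zero: strong collapse
print(nc1_metric(dilated, labels))    # larger ratio: dilated collapse
```

In this toy setup, increasing the within-class noise (as memorization of noisy labels is argued to do) directly inflates the scatter ratio, mirroring the dilation of neural collapse discussed in the abstract.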
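Label smoothing, mentioned above as a modification of cross-entropy, replaces the one-hot target with a softened distribution that puts mass 1 - eps on the true class and spreads eps over the remaining classes. A minimal sketch of this loss follows; the value of eps and the toy logits are illustrative choices, not values from the paper.

```python
import numpy as np


def smoothed_cross_entropy(logits, label, num_classes, eps=0.1):
    """Cross-entropy against a smoothed one-hot target: 1 - eps on the
    true class, eps / (num_classes - 1) on each of the others."""
    target = np.full(num_classes, eps / (num_classes - 1))
    target[label] = 1.0 - eps
    # Numerically plain log-softmax for a 1-D logit vector.
    log_probs = logits - np.log(np.exp(logits).sum())
    return -(target * log_probs).sum()


logits = np.array([4.0, 1.0, 0.5])
print(smoothed_cross_entropy(logits, label=0, num_classes=3, eps=0.0))  # plain cross-entropy
print(smoothed_cross_entropy(logits, label=0, num_classes=3, eps=0.1))  # smoothed variant
```

With eps = 0 the loss reduces to standard cross-entropy; a positive eps penalizes overconfident predictions on the true class, which is the regularization effect the abstract refers to.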