The neural collapse (NC) phenomenon describes an underlying geometric symmetry in deep neural networks, where both the deeply learned features and the classifiers converge to a simplex equiangular tight frame. It has been shown that both the cross-entropy loss and the mean squared error loss provably lead to NC. We remove NC's key assumption on the feature dimension and the number of classes, and then present a generalized neural collapse (GNC) hypothesis that effectively subsumes the original NC. Inspired by how NC characterizes the training target of neural networks, we decouple GNC into two objectives: minimal intra-class variability and maximal inter-class separability. We then use hyperspherical uniformity (which characterizes the degree of uniformity on the unit hypersphere) as a unified framework to quantify these two objectives. Finally, we propose a general objective -- the hyperspherical uniformity gap (HUG), defined as the difference between inter-class and intra-class hyperspherical uniformity. HUG not only provably converges to GNC, but also decouples GNC into two separate objectives. Unlike the cross-entropy loss, which couples intra-class compactness and inter-class separability, HUG enjoys more flexibility and serves as a good alternative loss function. Empirical results show that HUG works well in terms of generalization and robustness.
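To make the HUG objective concrete, below is a minimal, hedged sketch rather than the authors' exact formulation. It assumes hyperspherical uniformity is measured with a pairwise Gaussian-kernel energy on L2-normalized vectors (lower energy meaning more uniform), uses class means as the inter-class points, and introduces a hypothetical balancing weight `lam`; the paper considers several choices of uniformity measure.

```python
# Hedged sketch of a HUG-style objective; assumptions noted above, not the paper's exact loss.
import torch
import torch.nn.functional as F


def pairwise_energy(x: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """Mean Gaussian-kernel energy between distinct L2-normalized rows of x.
    Lower values indicate the points are spread more uniformly on the sphere."""
    x = F.normalize(x, dim=-1)
    sq_dists = torch.cdist(x, x).pow(2)
    off_diag = ~torch.eye(x.shape[0], dtype=torch.bool, device=x.device)
    return torch.exp(-t * sq_dists[off_diag]).mean()


def hug_loss(features: torch.Tensor, labels: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
    """Illustrative loss to minimize: inter-class energy minus intra-class energy.
    Assumes the batch contains at least two distinct classes."""
    feats = F.normalize(features, dim=-1)
    classes = labels.unique()
    means, intra_terms = [], []
    for c in classes:
        cls_feats = feats[labels == c]
        means.append(cls_feats.mean(dim=0))
        if cls_feats.shape[0] >= 2:  # pairwise energy needs at least two points
            intra_terms.append(pairwise_energy(cls_feats))
    inter = pairwise_energy(torch.stack(means))       # small when class means are uniform
    intra = torch.stack(intra_terms).mean() if intra_terms else feats.new_zeros(())
    return inter - lam * intra                         # minimize: spread means, collapse within-class features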