Modern deep neural networks have achieved impressive performance on tasks from image classification to natural language processing. Surprisingly, these complex systems with massive numbers of parameters exhibit the same structural properties in their last-layer features and classifiers across canonical datasets when trained until convergence. In particular, it has been observed that the last-layer features collapse to their class-means, and those class-means are the vertices of a simplex Equiangular Tight Frame (ETF). This phenomenon is known as Neural Collapse ($\mathcal{NC}$). Recent papers have theoretically shown that $\mathcal{NC}$ emerges in the global minimizers of training problems with the simplified ``unconstrained feature model''. In this context, we take a step further and prove that $\mathcal{NC}$ occurs in deep linear networks for the popular mean squared error (MSE) and cross-entropy (CE) losses, showing that global solutions exhibit $\mathcal{NC}$ properties across the linear layers. Furthermore, we extend our study to imbalanced data for the MSE loss and present the first geometric analysis of $\mathcal{NC}$ in the bias-free setting. Our results demonstrate the convergence of the last-layer features and classifiers to a geometry consisting of orthogonal vectors whose lengths depend on the amount of data in their corresponding classes. Finally, we empirically validate our theoretical analyses on synthetic and practical network architectures in both balanced and imbalanced scenarios.
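For reference, a brief sketch of the simplex ETF geometry mentioned above, using notation assumed here (not taken from this paper): with $K$ classes and feature dimension $d \ge K$, the class-means can be arranged as the columns of
$$\mathbf{M} \;=\; \sqrt{\tfrac{K}{K-1}}\;\mathbf{P}\Big(\mathbf{I}_K - \tfrac{1}{K}\mathbf{1}_K\mathbf{1}_K^{\top}\Big),$$
where $\mathbf{P}\in\mathbb{R}^{d\times K}$ is any matrix with orthonormal columns ($\mathbf{P}^{\top}\mathbf{P}=\mathbf{I}_K$). The columns of $\mathbf{M}$ have equal norms and, after normalization, pairwise inner products equal to $-\tfrac{1}{K-1}$, i.e., they are maximally and equally separated.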