While cross entropy (CE) is the most commonly used loss for training deep neural networks on classification tasks, many alternative losses have been developed to obtain better empirical performance. Among them, which one is best to use remains a mystery, because there appear to be multiple factors affecting the answer, such as the properties of the dataset, the choice of network architecture, and so on. This paper studies the choice of loss function by examining the last-layer features of deep networks, drawing inspiration from a recent line of work showing that the global optimal solutions of the CE and mean-squared-error (MSE) losses exhibit a Neural Collapse phenomenon. That is, for sufficiently large networks trained until convergence, (i) all features of the same class collapse to the corresponding class mean, and (ii) the means associated with different classes are in a configuration where their pairwise distances are all equal and maximized. We extend these results and show, through global solution and landscape analyses, that a broad family of loss functions, including the commonly used label smoothing (LS) and focal loss (FL), exhibits Neural Collapse. Hence, all relevant losses (i.e., CE, LS, FL, MSE) produce equivalent features on training data. Based on the unconstrained feature model assumption, we provide a global landscape analysis for the LS loss and a local landscape analysis for the FL loss, and show that the (only!) global minimizers are Neural Collapse solutions, while all other critical points are strict saddles whose Hessians exhibit negative curvature directions, globally for the LS loss and locally near the optimal solution for the FL loss. Our experiments further show that the Neural Collapse features obtained from all relevant losses lead to largely identical performance on test data as well, provided that the network is sufficiently large and trained until convergence.
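As a minimal illustration of property (ii), the maximally separated configuration of class means is a simplex equiangular tight frame (ETF). The NumPy sketch below (not from the paper; K = 4 is an arbitrary example value) constructs the standard simplex ETF and numerically checks that all pairwise distances between class means are equal and that all pairwise cosines equal -1/(K-1):

```python
# Hypothetical sketch: the Neural Collapse geometry of property (ii)
# is a simplex equiangular tight frame (ETF).
import numpy as np

K = 4  # number of classes (illustrative choice)

# Columns of M are the K unit-norm class means of a simplex ETF in R^K.
M = np.sqrt(K / (K - 1)) * (np.eye(K) - np.ones((K, K)) / K)

# All pairwise distances between class means are equal.
dists = [np.linalg.norm(M[:, i] - M[:, j])
         for i in range(K) for j in range(i + 1, K)]
print(np.allclose(dists, dists[0]))  # True

# All pairwise cosines equal -1/(K-1): equiangular and maximally separated.
G = M.T @ M
off_diag = G[~np.eye(K, dtype=bool)]
print(np.allclose(off_diag, -1.0 / (K - 1)))  # True
```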
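For concreteness, the two non-CE losses in the family studied here have standard definitions, sketched below in PyTorch (not the paper's code; function names and the default values alpha=0.1 and gamma=2.0 are illustrative). Both reduce to plain CE when alpha = 0 and gamma = 0, respectively:

```python
# Hypothetical sketch of the standard label smoothing (LS) and
# focal loss (FL) objectives; not taken from the paper's implementation.
import torch
import torch.nn.functional as F

def label_smoothing_loss(logits, targets, alpha=0.1):
    """CE against smoothed targets: (1 - alpha) * one-hot + alpha / K."""
    K = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    smooth = torch.full_like(log_probs, alpha / K)
    smooth.scatter_(-1, targets.unsqueeze(-1), 1.0 - alpha + alpha / K)
    return -(smooth * log_probs).sum(dim=-1).mean()

def focal_loss(logits, targets, gamma=2.0):
    """FL(p_t) = -(1 - p_t)^gamma * log(p_t), with p_t the true-class probability."""
    log_probs = F.log_softmax(logits, dim=-1)
    log_pt = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    pt = log_pt.exp()
    return (-(1.0 - pt) ** gamma * log_pt).mean()
```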