In this paper, we propose a Dual Focal Loss (DFL) function as a replacement for the standard cross-entropy (CE) function, to achieve a better treatment of imbalanced classes in a dataset. Our DFL method improves on the recently reported Focal Loss (FL) cross-entropy function, which scales the loss to put more weight on examples that are difficult to classify than on those that are easy. However, the scaling parameter of FL is set empirically and is problem-dependent. In addition, like other CE variants, FL focuses only on the loss of the true classes, so no loss feedback is gained from the false classes. Although focusing only on true classes increases the probability of the true classes and correspondingly reduces the probability of the false classes due to the nature of the softmax function, it does not achieve the best convergence because the loss on the false classes is ignored. Our DFL method improves on FL in two ways. First, it keeps the idea of FL to focus more on difficult examples than on easy ones, but evaluates loss on both the true and the false classes with equal importance. Second, the scaling parameter of DFL is made learnable, so that it tunes itself by backpropagation rather than depending on manual tuning. In this way, our proposed DFL method offers an auto-tunable loss function that can reduce the class imbalance effect and put more focus on both difficult true examples and easy false examples.
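To make the contrast concrete, the following is a minimal numpy sketch of the standard Focal Loss alongside one plausible dual-term variant. The exact DFL formula is not given in this abstract, so the false-class term below (a symmetric `-p_j^gamma * log(1 - p_j)` penalty over the false classes) is an illustrative assumption, not the authors' definition; likewise, `gamma` is shown as a plain argument here, whereas the paper makes it a learnable parameter updated by backpropagation.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def focal_loss(logits, y, gamma=2.0):
    # standard Focal Loss: -(1 - p_t)^gamma * log(p_t),
    # down-weighting easy examples (p_t close to 1)
    p = softmax(logits)
    pt = p[np.arange(len(y)), y]
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt)))

def dual_focal_loss(logits, y, gamma=2.0):
    # hypothetical dual form: the usual true-class focal term plus a
    # symmetric penalty on each false class j: -p_j^gamma * log(1 - p_j).
    # Easy false classes (p_j near 0) contribute little; confident
    # mistakes (p_j near 1) are penalized heavily.
    p = softmax(logits)
    n, k = p.shape
    pt = p[np.arange(n), y]
    true_term = -((1.0 - pt) ** gamma) * np.log(pt)
    mask = np.ones_like(p, dtype=bool)
    mask[np.arange(n), y] = False
    pf = p[mask].reshape(n, k - 1)          # probabilities of false classes
    false_term = (-(pf ** gamma) * np.log(1.0 - pf)).sum(axis=1)
    return float(np.mean(true_term + false_term))
```

Because the false-class term is nonnegative, this dual variant always upper-bounds the plain focal loss on the same batch; the extra gradient it supplies through the false-class probabilities is the "loss feedback from the false classes" that the abstract argues FL lacks.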