IoU losses are surrogates that directly optimize the Jaccard index. In semantic segmentation, incorporating an IoU loss into the training objective has been shown to yield better Jaccard index scores than optimizing pixel-wise losses, such as the cross-entropy loss, alone. The most notable IoU losses are the soft Jaccard loss and the Lovász-Softmax loss. However, these losses are incompatible with soft labels, which are ubiquitous in machine learning. In this paper, we propose Jaccard metric losses (JMLs), which are identical to the soft Jaccard loss in a standard setting with hard labels but are compatible with soft labels. With JMLs, we study two of the most popular use cases of soft labels: label smoothing and knowledge distillation. With a variety of architectures, our experiments show significant improvements over the cross-entropy loss on three semantic segmentation datasets (Cityscapes, PASCAL VOC and DeepGlobe Land), and our simple approach outperforms state-of-the-art knowledge distillation methods by a large margin. Code is available at: \href{https://github.com/zifuwanggg/JDTLosses}{https://github.com/zifuwanggg/JDTLosses}.
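As a minimal sketch (not the paper's implementation), the standard soft Jaccard loss for a single class can be written as below. The example also illustrates the soft-label incompatibility mentioned above: with a soft target, predicting the target exactly does not minimize the loss, so the loss is not a proper metric on soft labels.

```python
import numpy as np

def soft_jaccard_loss(pred, target, eps=1e-7):
    """Standard soft Jaccard loss, 1 - IoU, for one class.

    pred, target: arrays of per-pixel probabilities in [0, 1].
    With hard (0/1) targets this relaxes the Jaccard index exactly.
    """
    inter = np.sum(pred * target)
    union = np.sum(pred) + np.sum(target) - inter
    return 1.0 - inter / (union + eps)

# With a hard target, predicting it exactly gives (near-)zero loss.
hard = np.array([1.0, 1.0, 0.0, 0.0])
print(soft_jaccard_loss(hard, hard))  # ~0.0

# With a soft target (e.g. after label smoothing), pred == target is
# NOT the minimizer: over-confident predictions score lower.
soft = np.full(4, 0.5)
print(soft_jaccard_loss(soft, soft))         # 1 - 1/3 ≈ 0.667
print(soft_jaccard_loss(np.ones(4), soft))   # 1 - 2/4 = 0.5, smaller!
```

This mismatch is exactly the failure mode JMLs are designed to remove while remaining identical to the soft Jaccard loss on hard labels.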