The soft Dice loss (SDL) has taken a pivotal role in many automated segmentation pipelines in the medical imaging community. In recent years, some of the reasons behind its superior performance have been uncovered and further optimizations have been explored. However, there is currently no implementation that supports its direct use in settings with soft labels. Hence, a synergy between the use of SDL and research leveraging soft labels, also in the context of model calibration, is still missing. In this work, we introduce Dice semimetric losses (DMLs), which (i) are by design identical to SDL in a standard setting with hard labels, but (ii) can also be employed in settings with soft labels. Our experiments on the public QUBIQ, LiTS and KiTS benchmarks confirm the potential synergy of DMLs with soft labels (e.g., averaging, label smoothing, and knowledge distillation) over hard labels (e.g., majority voting and random selection). As a result, we obtain superior Dice scores and model calibration, which supports the wider adoption of DMLs in practice. Code is available at \href{https://github.com/zifuwanggg/JDTLosses}{https://github.com/zifuwanggg/JDTLosses}.
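To make the contrast between SDL and a semimetric variant concrete, the following is a minimal NumPy sketch, not the authors' implementation. It assumes the semimetric form replaces the intersection term $2\langle x, y\rangle$ with the $L_1$ expression $\lVert x\rVert_1 + \lVert y\rVert_1 - \lVert x - y\rVert_1$ (equal to $2\sum_i \min(x_i, y_i)$), which coincides with SDL for hard labels but attains zero loss whenever the prediction exactly matches a soft label; the exact formulation in the paper may differ.

```python
import numpy as np

def soft_dice_loss(x, y, eps=1e-7):
    """Standard soft Dice loss: 1 - 2<x,y> / (|x|_1 + |y|_1)."""
    inter = np.sum(x * y)
    return 1.0 - 2.0 * inter / (np.sum(x) + np.sum(y) + eps)

def dice_semimetric_loss(x, y, eps=1e-7):
    """Hypothetical semimetric variant (an assumption, see lead-in):
    replaces 2<x,y> with |x|_1 + |y|_1 - |x - y|_1.
    Identical to SDL when y is hard (0/1-valued), but reaches 0
    whenever x == y, even for soft y."""
    s = np.sum(x) + np.sum(y)
    diff = np.sum(np.abs(x - y))
    return 1.0 - (s - diff) / (s + eps)

# Hard labels: both losses agree.
x_hard = np.array([0.8, 0.1, 0.6])
y_hard = np.array([1.0, 0.0, 1.0])

# Soft labels: SDL penalizes a perfect match, the semimetric form does not.
x_soft = np.array([0.5, 0.5])
```

The soft-label case illustrates why SDL cannot be used directly with averaged or smoothed annotations: even when the prediction equals the soft target exactly, `soft_dice_loss` stays strictly positive, whereas the semimetric variant vanishes.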