Despite the superior performance of Deep Learning (DL) on numerous segmentation tasks, DL-based approaches are notoriously overconfident about their predictions, producing highly polarized label probabilities. This is often undesirable for applications with inherent label ambiguity, which persists even in human annotations. This challenge has been addressed by leveraging multiple annotations per image and by estimating segmentation uncertainty. However, multiple per-image annotations are often unavailable in real-world applications, and uncertainty estimates do not give users full control over segmentation results. In this paper, we propose novel methods to improve segmentation probability estimation without sacrificing performance in the real-world scenario where only one ambiguous annotation per image is available. We marginalize the estimated segmentation probability maps of networks that are encouraged to under-/over-segment by varying the Tversky loss, without penalizing balanced segmentation. Moreover, we propose a unified hypernetwork ensemble method to alleviate the computational burden of training multiple networks. Our approaches successfully estimated segmentation probability maps that reflected the underlying structures and provided intuitive control over segmentation for challenging 3D medical image segmentation tasks. Although the main focus of our proposed methods is not to improve binary segmentation performance, our approaches marginally outperformed state-of-the-art methods. The code is available at \url{https://github.com/sh4174/HypernetEnsemble}.
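To make the two ingredients named above concrete, the following is a minimal sketch in PyTorch, not the released implementation: a binary Tversky loss whose weights $\alpha$ (on false positives) and $\beta$ (on false negatives) bias a network toward under- or over-segmentation, together with a simple uniform marginalization of the probability maps predicted by networks trained with different weights. All function names, tensor shapes, and the toy data are illustrative assumptions.
\begin{verbatim}
# Minimal sketch (assumed PyTorch binary-segmentation setting, not the
# authors' released code) of a tunable Tversky loss and of marginalizing
# the probability maps of an ensemble trained with different weights.
import torch

def tversky_loss(probs: torch.Tensor, target: torch.Tensor,
                 alpha: float, beta: float, eps: float = 1e-6) -> torch.Tensor:
    """Tversky loss: alpha weights false positives, beta false negatives.
    alpha = beta = 0.5 recovers the Dice loss; raising beta penalizes
    missed foreground more heavily, pushing the network to over-segment."""
    tp = (probs * target).sum()
    fp = (probs * (1.0 - target)).sum()
    fn = ((1.0 - probs) * target).sum()
    return 1.0 - tp / (tp + alpha * fp + beta * fn + eps)

def marginalize_probability_maps(prob_maps: list[torch.Tensor]) -> torch.Tensor:
    """Uniform average over ensemble members' probability maps, yielding a
    softer, less polarized probability estimate."""
    return torch.stack(prob_maps, dim=0).mean(dim=0)

if __name__ == "__main__":
    # Toy example: three hypothetical networks, trained with different
    # (alpha, beta) pairs, predict probability maps for one small 3D volume.
    torch.manual_seed(0)
    target = (torch.rand(1, 1, 8, 32, 32) > 0.7).float()
    prob_maps = [torch.rand(1, 1, 8, 32, 32) for _ in range(3)]
    for (a, b), p in zip([(0.7, 0.3), (0.5, 0.5), (0.3, 0.7)], prob_maps):
        print(f"alpha={a}, beta={b}: loss={tversky_loss(p, target, a, b):.4f}")
    marginal = marginalize_probability_maps(prob_maps)
    print("marginalized map shape:", tuple(marginal.shape))
\end{verbatim}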
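The unified hypernetwork ensemble can likewise be illustrated with a small sketch. The idea, under our assumptions about the architecture (the sketch below is hypothetical, not the HypernetEnsemble release), is that instead of training one full network per Tversky setting, a small hypernetwork maps the loss parameter $\beta$ to the weights of a segmentation head, so a single model stands in for the whole ensemble.
\begin{verbatim}
# Hypothetical sketch of a hypernetwork-generated segmentation head:
# a scalar Tversky beta is mapped to the weights of a 1x1x1 conv head.
import torch
import torch.nn as nn

class HyperSegHead(nn.Module):
    """Generates the weights of a 1x1x1 3D conv head from the Tversky beta."""
    def __init__(self, in_channels: int = 16):
        super().__init__()
        self.in_channels = in_channels
        # Hypernetwork: beta (a scalar) -> conv weight and bias.
        self.hyper = nn.Sequential(
            nn.Linear(1, 64), nn.ReLU(),
            nn.Linear(64, in_channels + 1),
        )

    def forward(self, features: torch.Tensor, beta: torch.Tensor) -> torch.Tensor:
        params = self.hyper(beta.view(1, 1)).squeeze(0)          # (C + 1,)
        weight = params[:self.in_channels].view(1, self.in_channels, 1, 1, 1)
        bias = params[self.in_channels:]
        logits = nn.functional.conv3d(features, weight, bias)
        return torch.sigmoid(logits)

if __name__ == "__main__":
    head = HyperSegHead(in_channels=16)
    feats = torch.randn(1, 16, 8, 32, 32)  # features from a shared backbone
    # At test time, sweeping beta and averaging marginalizes over
    # under-/over-segmenting heads without storing separate networks.
    maps = [head(feats, torch.tensor([b])) for b in (0.3, 0.5, 0.7)]
    marginal = torch.stack(maps, dim=0).mean(dim=0)
    print("marginal probability map:", tuple(marginal.shape))
\end{verbatim}
In such a setup, training would presumably sample $\beta$ per batch and apply the corresponding Tversky loss, so one set of hypernetwork weights replaces an ensemble of separately trained networks and alleviates the computational burden noted above.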