The reliability of Deep Learning systems depends not only on their accuracy but also on their robustness against adversarial perturbations of the input data. Several attacks and defenses have been proposed to improve the performance of Deep Neural Networks in the presence of adversarial noise in the natural image domain. However, robustness in computer-aided diagnosis for volumetric data has only been explored for specific tasks and with limited attacks. We propose a new framework to assess the robustness of general medical image segmentation systems. Our contributions are two-fold: (i) we propose a new benchmark for evaluating robustness in the context of the Medical Segmentation Decathlon (MSD) by extending the recent AutoAttack framework from natural image classification to the domain of volumetric data segmentation, and (ii) we present a novel lattice architecture for RObust Generic medical image segmentation (ROG). Our results show that ROG generalizes across the different tasks of the MSD and largely surpasses the state of the art under sophisticated adversarial attacks.
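To make the attack setting concrete, the following is a minimal sketch, assuming a PyTorch segmentation model, of the kind of PGD-style white-box attack that underlies AutoAttack, adapted here to volumetric inputs. The loss choice, perturbation budget, step sizes, and tensor shapes are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch (not the authors' code): L-inf PGD against a 3D segmentation model.
# Assumes inputs normalized to [0, 1] and integer voxel-wise labels.
import torch
import torch.nn.functional as F

def pgd_attack_3d(model, volume, target, eps=8/255, alpha=2/255, steps=10):
    """Maximize segmentation loss within an L-inf ball of radius eps.

    volume: (B, C, D, H, W) input scan.
    target: (B, D, H, W) ground-truth label map.
    """
    # Random start inside the eps-ball, as in standard PGD.
    x_adv = volume + torch.empty_like(volume).uniform_(-eps, eps)
    x_adv = x_adv.clamp(0, 1).detach()

    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)  # (B, K, D, H, W) class scores per voxel
        # Per-voxel cross-entropy; a Dice-based surrogate is another option.
        loss = F.cross_entropy(logits, target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                  # ascend loss
            x_adv = volume + (x_adv - volume).clamp(-eps, eps)   # project to ball
            x_adv = x_adv.clamp(0, 1).detach()                   # valid intensity range
    return x_adv
```

AutoAttack itself combines several such attacks (including adaptive-step variants of PGD) into a parameter-free ensemble; the sketch above shows only the single-attack core to convey how the classification-oriented setup transfers to dense volumetric prediction.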