Deep segmentation models that generalize to images with unknown appearance shifts are important for real-world medical image analysis. Retraining models introduces high latency and complex pipelines, which are impractical in clinical settings. The problem is especially severe for ultrasound image analysis because of its large appearance shifts. In this paper, we propose a novel method for robust segmentation under unknown appearance shifts. Our contribution is three-fold. First, we advance a one-stage plug-and-play solution by embedding hierarchical style transfer units into a segmentation architecture; our solution removes appearance shifts and performs segmentation simultaneously. Second, we adopt Dynamic Instance Normalization to conduct precise and dynamic style transfer in a learnable manner, rather than the fixed style normalization used previously. Third, our solution is fast and lightweight enough for routine clinical adoption: given a 400×400 image input, it needs only an additional 0.2 ms and 1.92M FLOPs to handle appearance shifts compared with the baseline pipeline. Extensive experiments conducted on a large dataset from three vendors demonstrate that our proposed method enhances the robustness of deep segmentation models.
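To illustrate the normalization step described above, here is a minimal NumPy sketch of instance normalization with dynamically supplied affine parameters. In the actual method, `gamma` and `beta` would be predicted by a learnable auxiliary network from the style input; here they are plain arrays standing in for that network's output, and all names and shapes are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    # x: (C, H, W) feature map; normalize each channel to zero mean, unit variance
    mu = x.mean(axis=(1, 2), keepdims=True)
    sigma = x.std(axis=(1, 2), keepdims=True)
    return (x - mu) / (sigma + eps)

def dynamic_instance_norm(content, gamma, beta):
    # Dynamic variant: per-channel scale (gamma) and shift (beta) are not
    # fixed learned constants but would be produced on the fly from the
    # style input; here they are passed in as precomputed arrays.
    normalized = instance_norm(content)
    return gamma[:, None, None] * normalized + beta[:, None, None]

rng = np.random.default_rng(0)
content = rng.standard_normal((8, 16, 16))  # hypothetical feature map, 8 channels
gamma = rng.standard_normal(8)              # stand-in for predicted scale
beta = rng.standard_normal(8)               # stand-in for predicted shift
out = dynamic_instance_norm(content, gamma, beta)
```

After this operation, each output channel has mean `beta[c]` and standard deviation close to `|gamma[c]|`, i.e. the content statistics have been replaced by the dynamically supplied ones, which is the sense in which the style (first- and second-order channel statistics) is transferred.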