In clinical settings, where acquisition conditions and patient populations change over time, continual learning is key to ensuring the safe use of deep neural networks. Yet most existing work focuses on convolutional architectures and image classification. In contrast, radiologists prefer to work with segmentation models that outline specific regions of interest, for which Transformer-based architectures are gaining traction. The self-attention mechanism of Transformers could potentially mitigate catastrophic forgetting, opening the way for more robust medical image segmentation. In this work, we explore how recently proposed Transformer mechanisms for semantic segmentation behave in sequential learning scenarios, and analyse how best to adapt continual learning strategies to this setting. Our evaluation on hippocampus segmentation shows that Transformer mechanisms mitigate catastrophic forgetting for medical image segmentation compared to purely convolutional architectures, and demonstrates that regularising ViT modules should be done with caution.