We propose a novel attention gate (AG) model for medical imaging that automatically learns to focus on target structures of varying shapes and sizes. Models trained with AGs implicitly learn to suppress irrelevant regions in an input image while highlighting salient features useful for a specific task. This eliminates the need for the explicit external tissue/organ localisation modules used in cascaded convolutional neural networks (CNNs). AGs can be easily integrated into standard CNN architectures such as the U-Net model with minimal computational overhead while increasing model sensitivity and prediction accuracy. The proposed Attention U-Net architecture is evaluated on two large abdominal CT datasets for multi-class image segmentation. Experimental results show that AGs consistently improve the prediction performance of U-Net across different datasets and training sizes while preserving computational efficiency. The code for the proposed architecture is publicly available.
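To make the gating mechanism concrete, below is a minimal sketch of an additive attention gate in the spirit of the proposed AG, assuming PyTorch. The class name `AttentionGate`, the channel arguments, and the choice to upsample the gating signal to the skip-connection grid (rather than downsampling the skip features with a strided convolution, as in the paper's formulation) are illustrative simplifications, not the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionGate(nn.Module):
    """Additive attention gate (illustrative sketch): gates skip-connection
    features x with a coarser-scale gating signal g, producing per-pixel
    attention coefficients that suppress irrelevant regions."""

    def __init__(self, in_channels, gating_channels, inter_channels):
        super().__init__()
        # 1x1 convolutions project x and g into a shared intermediate space.
        self.theta_x = nn.Conv2d(in_channels, inter_channels, kernel_size=1)
        self.phi_g = nn.Conv2d(gating_channels, inter_channels, kernel_size=1)
        # psi maps the joint features to a single-channel attention map.
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, x, g):
        # x: skip-connection features, shape (B, C_x, H, W)
        # g: gating signal from a coarser scale, shape (B, C_g, H', W')
        theta = self.theta_x(x)
        # Resample the projected gating signal to the skip-connection grid.
        phi = F.interpolate(self.phi_g(g), size=theta.shape[2:],
                            mode='bilinear', align_corners=False)
        # Additive attention: ReLU then sigmoid yields coefficients in (0, 1).
        alpha = torch.sigmoid(self.psi(torch.relu(theta + phi)))
        # Scale x by the attention map; irrelevant regions are attenuated.
        return x * alpha


# Hypothetical usage with shapes typical of a U-Net skip connection:
gate = AttentionGate(in_channels=64, gating_channels=128, inter_channels=32)
x = torch.randn(1, 64, 64, 64)    # encoder skip-connection features
g = torch.randn(1, 128, 32, 32)   # coarser decoder (gating) features
out = gate(x, g)                  # same shape as x: (1, 64, 64, 64)
```

In an Attention U-Net-style decoder, x would be the encoder skip-connection features and g the coarser decoder features; the gated output then replaces x in the skip-connection concatenation, which is what allows the network to highlight salient structures without a separate localisation module.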