U-Net and its extensions have achieved great success in medical image segmentation. However, due to the inherently local nature of ordinary convolution operations, the U-Net encoder cannot effectively extract global context information. In addition, simple skip connections fail to capture salient features. In this work, we propose a fully convolutional segmentation network (CMU-Net) that incorporates hybrid convolutions and a multi-scale attention gate. The ConvMixer module extracts global context information by mixing features at distant spatial locations, and the multi-scale attention gate emphasizes valuable features to achieve efficient skip connections. We evaluate the proposed method on breast ultrasound datasets and a thyroid ultrasound image dataset, where CMU-Net achieves average Intersection over Union (IoU) values of 73.27% and 84.75% and F1 scores of 84.81% and 91.71%, respectively. The code is available at https://github.com/FengheTan9/CMU-Net.
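The "mixing features at distant spatial locations" idea behind ConvMixer can be illustrated with a minimal sketch: a depthwise convolution with a large kernel mixes information spatially within each channel, a residual connection preserves the input, and a pointwise (1x1) convolution mixes information across channels. The sketch below is a hypothetical, pure-Python simplification, not the authors' implementation: it omits batch normalization, uses ReLU in place of GELU, and takes hand-supplied kernels instead of learned weights.

```python
def depthwise_conv(x, kernels):
    """Per-channel 2D convolution with 'same' zero padding.

    x: list of C channel maps, each an H x W list of lists.
    kernels: list of C square k x k kernels, one per channel.
    A large k lets each output location see distant spatial positions.
    """
    C, H, W = len(x), len(x[0]), len(x[0][0])
    k = len(kernels[0])
    p = k // 2  # padding for 'same' output size
    out = []
    for c in range(C):
        ch = [[0.0] * W for _ in range(H)]
        for i in range(H):
            for j in range(W):
                s = 0.0
                for di in range(k):
                    for dj in range(k):
                        ii, jj = i + di - p, j + dj - p
                        if 0 <= ii < H and 0 <= jj < W:
                            s += x[c][ii][jj] * kernels[c][di][dj]
                ch[i][j] = s
        out.append(ch)
    return out


def pointwise_conv(x, weight):
    """1x1 convolution: mixes channels independently at each location.

    weight: C_out x C_in matrix applied at every (i, j).
    """
    C_in, H, W = len(x), len(x[0]), len(x[0][0])
    return [[[sum(weight[o][c] * x[c][i][j] for c in range(C_in))
              for j in range(W)] for i in range(H)]
            for o in range(len(weight))]


def convmixer_block(x, dw_kernels, pw_weight):
    """One ConvMixer-style block (simplified): spatial mixing with a
    residual connection, followed by channel mixing."""
    relu = lambda v: v if v > 0 else 0.0
    dw = depthwise_conv(x, dw_kernels)
    C, H, W = len(x), len(x[0]), len(x[0][0])
    # residual connection around the spatial-mixing step
    res = [[[relu(dw[c][i][j]) + x[c][i][j] for j in range(W)]
            for i in range(H)] for c in range(C)]
    return pointwise_conv(res, pw_weight)
```

With an identity depthwise kernel and identity pointwise weights, the block reduces to `relu(x) + x`, which makes the residual structure easy to verify on a toy input; in the real network both sets of weights are learned end to end.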