U-Net and its extended segmentation models have achieved great success in medical image segmentation tasks. However, due to the inherently local nature of ordinary convolution operations, the encoder cannot effectively extract global context information. In addition, simple skip connections cannot capture salient features. In this work, we propose a fully convolutional segmentation network (CMU-Net) that incorporates hybrid convolution and multi-scale attention gates. The ConvMixer module mixes distant spatial locations to extract global context information, while the multi-scale attention gate helps emphasize valuable features and achieves efficient skip connections. Evaluations on open-source breast ultrasound images and a private thyroid ultrasound image dataset show that CMU-Net achieves average IoU scores of 73.27% and 84.75% and F1 scores of 84.16% and 91.71%, respectively. The code is available at https://github.com/FengheTan9/CMU-Net.
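The spatial mixing that the abstract attributes to the ConvMixer module (a depthwise convolution with a large receptive field to mix distant spatial locations, followed by a pointwise convolution to mix channels) can be sketched in plain NumPy. This is a simplified illustration under assumed details, not the authors' implementation: the function names, the 7x7 kernel size, and the use of a GELU activation with a residual over the depthwise step follow the standard ConvMixer design, and batch normalization is omitted for brevity.

```python
import numpy as np

def gelu(x):
    # tanh approximation of the GELU activation
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def depthwise_conv(x, k):
    # x: (C, H, W), k: (C, kh, kw); per-channel 'same' convolution
    # that mixes spatial locations within each channel.
    C, H, W = x.shape
    kh, kw = k.shape[1:]
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((0, 0), (ph, ph), (pw, pw)))
    out = np.zeros_like(x)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(xp[c, i:i + kh, j:j + kw] * k[c])
    return out

def pointwise_conv(x, w):
    # w: (C_out, C_in); a 1x1 convolution, i.e. pure channel mixing.
    return np.tensordot(w, x, axes=([1], [0]))

def convmixer_block(x, k_dw, w_pw):
    # Spatial mixing with a residual connection, then channel mixing.
    x = x + gelu(depthwise_conv(x, k_dw))
    return gelu(pointwise_conv(x, w_pw))

# Example: a 4-channel 8x8 feature map through one block with 7x7
# depthwise kernels (illustrative sizes, not the paper's configuration).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8, 8))
k_dw = rng.normal(size=(4, 7, 7)) * 0.1
w_pw = rng.normal(size=(4, 4)) * 0.1
y = convmixer_block(x, k_dw, w_pw)  # shape preserved: (4, 8, 8)
```

The large depthwise kernel is what lets a single block relate spatially distant positions, which is the property the abstract contrasts with ordinary small-kernel convolutions.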