Medical image segmentation is one of the most fundamental tasks in medical image analysis. Various solutions have been proposed, including many deep learning-based techniques such as U-Net and FC-DenseNet. However, high-precision medical image segmentation remains highly challenging due to the inherent magnification and distortion in medical images, as well as the presence of lesions whose density is similar to that of normal tissue. In this paper, we propose TFCNs (Transformers for Fully Convolutional denseNets) to tackle this problem by introducing a ResLinear-Transformer (RL-Transformer) and a Convolutional Linear Attention Block (CLAB) into FC-DenseNet. TFCNs is not only able to utilize more latent information from CT images for feature extraction, but can also capture and disseminate semantic features, and filter out non-semantic features, more effectively through the CLAB module. Our experimental results show that TFCNs achieves state-of-the-art performance with a Dice score of 83.72\% on the Synapse dataset. In addition, we evaluate the robustness of TFCNs to lesion-area effects on public COVID-19 datasets. The Python code will be made publicly available at https://github.com/HUANGLIZI/TFCNs.