Accurate segmentation is a crucial step in medical image analysis, and applying supervised machine learning to segment organs or lesions has proven effective. However, it is costly to annotate data with the ground-truth labels required to train supervised algorithms, and the high variance of data collected from different domains tends to severely degrade performance on cross-site or cross-modality datasets. To mitigate this problem, this paper introduces a novel unsupervised domain adaptation (UDA) method named dispensed Transformer network (DTNet). DTNet contains three modules. First, a dispensed residual transformer block is designed, which achieves global attention through a dispensed interleaving operation while alleviating the excessive computational cost and GPU memory usage of the standard Transformer. Second, a multi-scale consistency regularization is proposed to mitigate the loss of detail in low-resolution outputs and improve feature alignment. Finally, a feature ranking discriminator is introduced to automatically assign different weights to domain-gap features, narrowing the distance between feature distributions and reducing the performance gap between the two domains. The proposed method is evaluated on a large cross-site fluorescein angiography (FA) retinal nonperfusion (RNP) dataset with 676 images and a widely used cross-modality dataset from the MM-WHS challenge. Extensive experiments demonstrate that the proposed network outperforms several state-of-the-art techniques.
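To make the "dispensed interleaving" idea concrete, the following is a minimal sketch of interleaved global self-attention: pixels spaced a fixed stride apart are grouped together, so each attention group spans the whole image while the per-group cost shrinks. The class name, the `dilation` parameter, and the use of `nn.MultiheadAttention` are illustrative assumptions for this sketch, not the authors' implementation.

```python
# Hedged sketch of interleaved ("dispensed") global self-attention.
# Names and hyper-parameters here are assumptions, not the paper's code.
import torch
import torch.nn as nn

class DispensedAttention(nn.Module):
    """Self-attention over interleaved sub-grids of a feature map.

    Pixels spaced `dilation` apart along H and W are placed in the same
    group, so every group covers the full image extent (global context)
    while each group has only (H*W)/dilation**2 tokens.
    """
    def __init__(self, dim, num_heads=4, dilation=2):
        super().__init__()
        self.dilation = dilation
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):  # x: (B, C, H, W); H and W divisible by dilation
        B, C, H, W = x.shape
        d = self.dilation
        # Split each spatial axis into (coarse position, within-block offset);
        # pixels sharing an offset are spaced d apart, i.e. interleaved.
        x = x.view(B, C, H // d, d, W // d, d)
        x = x.permute(0, 3, 5, 2, 4, 1)                   # (B, d, d, H/d, W/d, C)
        x = x.reshape(B * d * d, (H // d) * (W // d), C)  # one group per offset pair
        x, _ = self.attn(x, x, x)                         # attention within each dispersed group
        # Undo the rearrangement back to (B, C, H, W).
        x = x.view(B, d, d, H // d, W // d, C).permute(0, 5, 3, 1, 4, 2)
        return x.reshape(B, C, H, W)

# Usage on a toy feature map.
feat = torch.randn(1, 64, 32, 32)
out = DispensedAttention(dim=64)(feat)
print(out.shape)  # torch.Size([1, 64, 32, 32])
```

With dilation 2, each group attends over a quarter of the positions yet still samples the entire image, which is the intuition behind trading dense global attention for dispensed groups.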