Deep learning-based methods have recently achieved remarkable progress in medical image analysis, but they rely heavily on massive amounts of labeled training data. Transfer learning from pre-trained models has become a standard pipeline in medical image analysis to address this bottleneck. Despite their success, existing pre-trained models are mostly not tuned for multi-modal, multi-task generalization in medical domains. Specifically, their training data are either from a non-medical domain or in a single modality, failing to address the performance degradation that arises in cross-modal transfer. Furthermore, no effort has been made to explicitly extract the multi-level features required by a variety of downstream tasks. To overcome these limitations, we propose Universal Model, a transferable and generalizable pre-trained model for 3D medical image analysis. A unified self-supervised learning scheme is leveraged to learn representations from multiple unlabeled source datasets with different modalities and distinctive scan regions. A modality-invariant adversarial learning module is further introduced to improve cross-modal generalization. To fit a wide range of tasks, a simple yet effective scale classifier is incorporated to capture multi-level visual representations. To validate the effectiveness of the Universal Model, we perform extensive experimental analysis on five target tasks, covering multiple imaging modalities, distinctive scan regions, and different analysis tasks. Compared with both public 3D pre-trained models and newly investigated 3D self-supervised learning methods, the Universal Model demonstrates superior generalizability, manifested by higher performance, stronger robustness, and faster convergence. The pre-trained Universal Model is available at: \href{https://github.com/xm-cmic/Universal-Model}{https://github.com/xm-cmic/Universal-Model}.
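The abstract does not spell out how the modality-invariant adversarial learning module is built. One common way such adversarial invariance is implemented is a gradient reversal layer placed between the shared encoder and a modality discriminator: features pass through unchanged on the forward pass, while the discriminator's gradient is negated on the backward pass, pushing the encoder toward features from which modality cannot be predicted. The dependency-free sketch below illustrates only that generic mechanism; it is an assumption for illustration, not the paper's confirmed design, and all names are hypothetical.

```python
class GradientReversal:
    """Gradient reversal layer (GRL) sketch: identity on the forward pass,
    negated (and scaled) gradient on the backward pass."""

    def __init__(self, lam: float = 1.0):
        self.lam = lam  # trade-off weight for the adversarial signal

    def forward(self, features):
        # Identity: the modality discriminator sees encoder features as-is.
        return features

    def backward(self, grad_from_discriminator):
        # Reverse and scale the gradient flowing back into the encoder,
        # so minimizing the discriminator loss *maximizes* modality confusion.
        return [-self.lam * g for g in grad_from_discriminator]


grl = GradientReversal(lam=0.5)
feats = [0.2, -1.3, 4.0]
assert grl.forward(feats) == feats                      # unchanged forward
assert grl.backward([1.0, -2.0, 0.0]) == [-0.5, 1.0, 0.0]
```

In an autograd framework this is typically a custom function with an identity forward and a negated backward; the scalar `lam` (hypothetical name) balances the adversarial term against the self-supervised objective.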


Journal: Medical Image Analysis. Publisher: Elsevier. dblp: http://dblp.uni-trier.de/db/journals/mia/