Although supervised learning has enabled high performance for image segmentation, it requires a large amount of labeled training data, which can be difficult to obtain in the medical imaging field. Self-supervised learning (SSL) methods involving pretext tasks have shown promise in overcoming this requirement by first pretraining models using unlabeled data. In this work, we evaluate the efficacy of two inpainting-based SSL pretext tasks (context prediction and context restoration) for CT and MRI image segmentation in label-limited scenarios, and investigate the effect of SSL implementation design choices on downstream segmentation performance. We demonstrate that optimally trained and easy-to-implement inpainting-based SSL segmentation models can outperform classically supervised methods for MRI and CT tissue segmentation in label-limited scenarios, on both clinically relevant metrics and the traditional Dice score.