Recent self-supervised advances in medical computer vision exploit global and local anatomical self-similarity for pretraining prior to downstream tasks such as segmentation. However, current methods assume i.i.d. image acquisition, which is invalid in clinical study designs where follow-up longitudinal scans track subject-specific temporal changes. Further, existing self-supervised methods for medically-relevant image-to-image architectures exploit only spatial or temporal self-similarity, and do so only via a loss applied at a single image scale, with naive multi-scale spatiotemporal extensions collapsing to degenerate solutions. To these ends, this paper makes two contributions: (1) It presents a local and multi-scale spatiotemporal representation learning method for image-to-image architectures trained on longitudinal images. It exploits the spatiotemporal self-similarity of learned multi-scale intra-subject features for pretraining and develops several feature-wise regularizations that avoid collapsed identity representations; (2) During finetuning, it proposes a surprisingly simple self-supervised segmentation consistency regularization to exploit intra-subject correlation. Benchmarked in the one-shot segmentation setting, the proposed framework outperforms well-tuned randomly-initialized baselines as well as current self-supervised techniques designed for both i.i.d. and longitudinal datasets. These improvements are demonstrated on both longitudinal neurodegenerative adult MRI and developing infant brain MRI, yielding both higher performance and greater longitudinal consistency.
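The intra-subject segmentation consistency regularization described in contribution (2) can be illustrated with a minimal sketch. The abstract does not specify its exact form, so the function below is a hypothetical instance: given network logits for two spatially aligned timepoints of the same subject, it penalizes the mean disagreement between their predicted class probabilities. All names (`consistency_loss`, array shapes, the L1 distance) are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def softmax(logits, axis=-1):
    # numerically stable softmax over the class axis
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def consistency_loss(logits_t1, logits_t2):
    """Hypothetical intra-subject consistency term: mean absolute
    difference between per-voxel class probabilities predicted for
    two registered scans of the same subject."""
    p1 = softmax(logits_t1)
    p2 = softmax(logits_t2)
    return np.abs(p1 - p2).mean()

# Toy example: a 4x4 image with 3 segmentation classes at two timepoints.
rng = np.random.default_rng(0)
logits_a = rng.normal(size=(4, 4, 3))
logits_b = rng.normal(size=(4, 4, 3))

print(consistency_loss(logits_a, logits_a))  # identical predictions -> 0.0
print(consistency_loss(logits_a, logits_b))  # disagreement -> positive value
```

In practice such a term would be added, with a weighting coefficient, to the supervised segmentation loss during finetuning, so that anatomically stable regions receive consistent labels across a subject's timepoints.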