This paper introduces the DeepATLAS foundational model for localization tasks in the domain of high-dimensional biomedical data. Upon convergence of the proposed self-supervised objective, a pretrained model maps an input to an anatomically consistent embedding from which any point or set of points (e.g., boxes or segmentations) may be identified in a one-shot or few-shot approach. As a representative benchmark, a DeepATLAS model pretrained on a comprehensive cohort of 51,000+ unlabeled 3D computed tomography exams yields high one-shot segmentation performance on over 50 anatomic structures across four different external test sets, either matching or exceeding the performance of a standard supervised learning model. Further improvements in accuracy can be achieved by adding a small amount of labeled data using either a semisupervised or more conventional fine-tuning strategy.
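To make the one-shot use case described above concrete, the sketch below shows how a pretrained encoder that emits anatomically consistent per-voxel embeddings could propagate a segmentation from a single labeled reference exam to a new exam via nearest-neighbor matching in embedding space. The encoder interface, array shapes, and the brute-force matching loop are illustrative assumptions only; the actual DeepATLAS architecture and inference procedure are defined in the paper itself.

```python
# Hypothetical sketch of one-shot label propagation through anatomically
# consistent embeddings. The encoder signature, shapes, and brute-force
# nearest-neighbor search are assumptions for illustration, not the
# DeepATLAS implementation.
import numpy as np

def one_shot_segment(encoder, ref_volume, ref_mask, query_volume):
    """Propagate a segmentation from one labeled reference exam to a new exam.

    encoder      : callable mapping a (D, H, W) volume to (D, H, W, C) embeddings
    ref_volume   : the single annotated reference exam
    ref_mask     : integer label mask for the reference exam (0 = background)
    query_volume : unlabeled exam to segment
    """
    ref_emb = encoder(ref_volume)               # (D, H, W, C)
    qry_emb = encoder(query_volume)             # (D, H, W, C)

    c = ref_emb.shape[-1]
    ref_flat = ref_emb.reshape(-1, c)           # (N_ref, C)
    qry_flat = qry_emb.reshape(-1, c)           # (N_qry, C)
    ref_lab = np.asarray(ref_mask).reshape(-1)  # (N_ref,)

    # Assign each query voxel the label of its nearest reference voxel in
    # embedding space (brute force here; an approximate nearest-neighbor
    # index would be needed at full 3D resolution).
    pred = np.empty(qry_flat.shape[0], dtype=ref_lab.dtype)
    for i, q in enumerate(qry_flat):
        d = np.linalg.norm(ref_flat - q, axis=1)
        pred[i] = ref_lab[np.argmin(d)]
    return pred.reshape(np.asarray(query_volume).shape)
```

Under the same assumptions, a few-shot variant would simply pool the embedded voxels and labels from several annotated reference exams before the matching step.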