We present a deep learning strategy that enables, for the first time, contrast-agnostic semantic segmentation of completely unpreprocessed brain MRI scans, without requiring additional training or fine-tuning for new modalities. Classical Bayesian methods address this segmentation problem with unsupervised intensity models, but require significant computational resources. In contrast, learning-based methods can be fast at test time, but are sensitive to the data available at training. Our proposed learning method, SynthSeg, leverages a set of training segmentations (no intensity images required) to generate synthetic sample images of widely varying contrasts on the fly during training. These samples are produced using the generative model of the classical Bayesian segmentation framework, with randomly sampled parameters for appearance, deformation, noise, and bias field. Because each mini-batch has a different synthetic contrast, the final network is not biased towards any MRI contrast. We comprehensively evaluate our approach on four datasets comprising over 1,000 subjects and four types of MR contrast. The results show that our approach successfully segments every contrast in the data, performing slightly better than classical Bayesian segmentation, and three orders of magnitude faster. Moreover, even within the same type of MRI contrast, our strategy generalizes significantly better across datasets, compared to training using real images. Finally, we find that synthesizing a broad range of contrasts, even if unrealistic, increases the generalization of the neural network. Our code and model are open source at https://github.com/BBillot/SynthSeg.
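To make the generative idea concrete, below is a minimal, illustrative sketch of how a synthetic image of random contrast can be produced from a label map: per-label intensities are drawn from a Gaussian mixture with randomly sampled means and standard deviations, then a smooth bias field and noise are applied. The function name `synthesize_image`, the NumPy/SciPy implementation, and all parameter ranges are assumptions for illustration only (random spatial deformation of the label map is omitted for brevity); the actual generative model and settings are in the linked repository.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthesize_image(label_map, labels, rng=None):
    """Illustrative sketch: synthesize a random-contrast image from a label map.

    Each label receives intensities drawn from a Gaussian with randomly
    sampled mean/std (the GMM of the Bayesian generative model), followed by
    a smooth multiplicative bias field and additive noise. Parameter ranges
    here are placeholders, not the paper's settings.
    """
    rng = np.random.default_rng() if rng is None else rng
    image = np.zeros(label_map.shape, dtype=np.float32)

    # Randomly sampled GMM parameters: one mean/std per label.
    means = rng.uniform(0.0, 1.0, size=len(labels))
    stds = rng.uniform(0.01, 0.1, size=len(labels))
    for k, lab in enumerate(labels):
        mask = label_map == lab
        image[mask] = rng.normal(means[k], stds[k], size=int(mask.sum()))

    # Smooth multiplicative bias field (low-frequency intensity modulation).
    bias = gaussian_filter(rng.normal(0.0, 0.3, size=label_map.shape), sigma=16)
    image *= np.exp(bias)

    # Additive Gaussian noise, then rescale to [0, 1].
    image += rng.normal(0.0, 0.02, size=image.shape)
    image = (image - image.min()) / (image.max() - image.min() + 1e-8)
    return image

# Usage: a toy 2D label map with three structures; each call to
# synthesize_image yields a different, random contrast for the same labels.
toy_labels = np.zeros((64, 64), dtype=np.int32)
toy_labels[16:48, 16:48] = 1
toy_labels[24:40, 24:40] = 2
synthetic = synthesize_image(toy_labels, labels=[0, 1, 2])
```

Because the GMM parameters are resampled for every mini-batch, no single MRI contrast dominates training, which is the mechanism behind the contrast-agnostic behaviour described above.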