Partial voluming (PV) is arguably the last crucial unsolved problem in Bayesian segmentation of brain MRI with probabilistic atlases. PV occurs when voxels contain multiple tissue classes, giving rise to image intensities that may not be representative of any one of the underlying classes. PV is particularly problematic for segmentation when there is a large resolution gap between the atlas and the test scan, e.g., when segmenting clinical scans with thick slices, or when using a high-resolution atlas. In this work, we present PV-SynthSeg, a convolutional neural network (CNN) that tackles this problem by directly learning a mapping between (possibly multi-modal) low resolution (LR) scans and underlying high resolution (HR) segmentations. PV-SynthSeg simulates LR images from HR label maps with a generative model of PV, and can be trained to segment scans of any desired target contrast and resolution, even for previously unseen modalities where neither images nor segmentations are available at training. PV-SynthSeg does not require any preprocessing, and runs in seconds. We demonstrate the accuracy and flexibility of the method with extensive experiments on three datasets and 2,680 scans. The code is available at https://github.com/BBillot/SynthSeg.
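The generative model described above simulates low-resolution scans with partial voluming from high-resolution label maps. A minimal sketch of the core idea (not the paper's actual implementation, whose full generative model lives in the linked repository): draw a per-class Gaussian intensity for every HR voxel, then block-average down to the LR grid, so that LR voxels straddling a tissue boundary mix intensities from several classes. The function name and parameters below are illustrative.

```python
import numpy as np

def simulate_pv_scan(hr_labels, means, stds, factor):
    """Simulate an LR scan with partial voluming from an HR label map.

    hr_labels : int array of HR segmentation labels
    means, stds : per-class Gaussian intensity parameters
    factor : integer downsampling factor per axis
    """
    # Sample an HR intensity image: each voxel draws from its class's Gaussian.
    hr = np.random.normal(means[hr_labels], stds[hr_labels])
    # Crop so every axis is divisible by the downsampling factor.
    lr_shape = tuple(s // factor for s in hr_labels.shape)
    hr = hr[tuple(slice(0, s * factor) for s in lr_shape)]
    # Block-average: LR voxels on class boundaries mix intensities (PV effect).
    lr = hr.reshape(lr_shape[0], factor,
                    lr_shape[1], factor,
                    lr_shape[2], factor).mean(axis=(1, 3, 5))
    return lr

# Tiny example: two classes split across the first axis, downsampled by 2.
labels = np.zeros((4, 4, 4), dtype=int)
labels[1:] = 1                      # boundary inside the first LR block
means = np.array([0.0, 100.0])
stds = np.array([0.0, 0.0])          # zero noise to make the PV mix exact
lr = simulate_pv_scan(labels, means, stds, 2)
# lr[0] voxels average half class 0 and half class 1 -> intensity 50,
# an intensity representative of neither class, which is exactly the
# PV ambiguity the network learns to resolve.
```

In the full method, a CNN is then trained on many such synthetic (LR scan, HR label map) pairs, which is what lets it target any contrast and resolution without real images of that modality.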