The removal of non-brain signal from magnetic resonance imaging (MRI) data, known as skull-stripping, is an integral component of many neuroimage analysis streams. Despite their abundance, popular classical skull-stripping methods are usually tailored to images with specific acquisition properties, namely near-isotropic resolution and T1-weighted (T1w) MRI contrast, which are prevalent in research settings. As a result, existing tools tend to adapt poorly to other image types, such as stacks of thick slices acquired with fast spin-echo (FSE) MRI that are common in the clinic. While learning-based approaches for brain extraction have gained traction in recent years, these methods face a similar burden, as they are only effective for image types seen during the training procedure. To achieve robust skull-stripping across a landscape of imaging protocols, we introduce SynthStrip, a rapid, learning-based brain-extraction tool. By leveraging anatomical segmentations to generate an entirely synthetic training dataset with anatomies, intensity distributions, and artifacts that far exceed the realistic range of medical images, SynthStrip learns to successfully generalize to a variety of real acquired brain images, removing the need for training data with target contrasts. We demonstrate the efficacy of SynthStrip for a diverse set of image acquisitions and resolutions across subject populations, ranging from newborn to adult. We show substantial improvements in accuracy over popular skull-stripping baselines -- all with a single trained model. Our method and labeled evaluation data are available at https://w3id.org/synthstrip.
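The core of the synthesis strategy described above is to start from an anatomical label map and sample a random intensity distribution for each label, so that every training image has a novel, unrealistic contrast. The sketch below is an illustrative simplification of that idea (the actual SynthStrip generator also applies bias fields, spatial deformations, resolution changes, and other artifact augmentations); the function name and the uniform sampling ranges are assumptions for demonstration, not the paper's exact parameters.

```python
import random

def synthesize_image(label_map, seed=None):
    """Sketch: generate a synthetic training image from a 2D label map
    by drawing a random Gaussian intensity model per anatomical label.
    Each call with a new seed yields a new, arbitrary 'contrast'."""
    rng = random.Random(seed)
    labels = sorted({lab for row in label_map for lab in row})
    # random mean/spread per label -- ranges are illustrative assumptions
    params = {lab: (rng.uniform(0.0, 255.0), rng.uniform(1.0, 25.0))
              for lab in labels}
    return [[rng.gauss(*params[lab]) for lab in row] for row in label_map]

# toy 3x3 "segmentation" with three labels (0 = background)
seg = [[0, 0, 1],
       [0, 1, 2],
       [1, 2, 2]]
img = synthesize_image(seg, seed=0)
```

Because the network only ever sees images produced this way, it cannot rely on any one intensity profile and must instead learn the shape cues encoded in the segmentations, which is what lets a single model generalize across real MRI contrasts.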