Domain shift is one of the most salient challenges in medical computer vision. Due to the immense variability in scanner parameters and imaging protocols, even images obtained from the same patient on the same scanner can differ significantly. We address the variability in computed tomography (CT) images caused by the different convolution kernels used in the reconstruction process, a critical domain-shift factor in CT. The choice of convolution kernel affects pixel granularity, image smoothness, and noise level. We analyze a dataset of paired CT images in which smooth and sharp images were reconstructed from the same sinograms with different kernels, providing identical anatomy but different style. Although identical predictions are desired, we show that the consistency, measured as the average Dice score between predictions on pairs, is only 0.54. We propose Filtered Back-Projection Augmentation (FBPAug), a simple and surprisingly effective approach that augments CT images in sinogram space, emulating reconstruction with different kernels. We apply the proposed method in a zero-shot domain adaptation setup and show that consistency improves from 0.54 to 0.92, outperforming other augmentation approaches. Neither specific preparation of source-domain data nor any target-domain data is required, so our publicly released FBPAug can be used as a plug-and-play module for zero-shot domain adaptation in any CT-based task.
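The core idea, augmenting in sinogram space by reconstructing with a different filter, can be sketched with scikit-image's parallel-beam Radon transform. This is an illustrative approximation, not the authors' released implementation: FBPAug parameterizes its own family of sharpening/smoothing kernels, whereas here the kernel change is emulated by swapping the standard FBP filter (`filter_name`) during back-projection.

```python
import numpy as np
from skimage.transform import radon, iradon

def fbp_kernel_augment(image, filter_name="hann"):
    """Emulate a change of reconstruction kernel for a 2D CT slice.

    Forward-project the image to sinogram space, then reconstruct it
    with filtered back-projection using a different filter. Available
    filters in scikit-image include "ramp" (sharpest), "shepp-logan",
    "cosine", "hamming", and "hann" (smoothest).
    """
    # Projection angles over a half rotation (parallel-beam geometry).
    theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
    # circle=False keeps content outside the inscribed circle.
    sinogram = radon(image, theta=theta, circle=False)
    # Reconstruct with the chosen filter, cropped back to input size.
    return iradon(sinogram, theta=theta, filter_name=filter_name,
                  circle=False, output_size=image.shape[0])
```

A sharp-to-smooth augmentation is then simply `fbp_kernel_augment(slice_2d, "hann")`, while `"ramp"` yields the sharpest reconstruction; applying this with randomly sampled filters during training is the zero-shot adaptation recipe the abstract describes.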