Anatomical structures such as blood vessels in contrast-enhanced CT (ceCT) images can be challenging to segment due to the variability in contrast medium diffusion. The combined use of ceCT and contrast-free CT images can improve segmentation performance, but at the cost of a double radiation exposure. To limit the radiation dose, generative models could be used to synthesize one modality instead of acquiring it. The CycleGAN approach has recently attracted particular attention because it alleviates the need for paired data, which are difficult to obtain. Despite the strong performance demonstrated in the literature, limitations remain when dealing with 3D volumes generated slice by slice from unpaired datasets with different fields of view. We present an extension of CycleGAN to generate high-fidelity images with good structural consistency in this context. We leverage anatomical constraints and automatic region-of-interest selection by adapting the Self-Supervised Body Regressor. These constraints enforce anatomical consistency and allow anatomically-paired input images to be fed to the algorithm. Results show qualitative and quantitative improvements over state-of-the-art methods on the translation task between ceCT and CT images (and vice versa).
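The key CycleGAN property mentioned above, training on unpaired data via a cycle-consistency constraint, can be illustrated with a minimal sketch. The generators `G` (ceCT to CT) and `F` (CT to ceCT) below are hypothetical stand-ins (simple intensity shifts, not the paper's networks) so the example stays self-contained and runnable.

```python
# Minimal sketch of the CycleGAN cycle-consistency loss (illustrative only;
# not the paper's implementation). G and F are toy generators standing in
# for the ceCT->CT and CT->ceCT mapping networks.

def G(x):
    # hypothetical ceCT -> CT generator: remove a fixed contrast offset
    return [v - 0.3 for v in x]

def F(y):
    # hypothetical CT -> ceCT generator: add the contrast offset back
    return [v + 0.3 for v in y]

def l1(a, b):
    # mean absolute error between two "images" (flattened intensity lists)
    return sum(abs(u - v) for u, v in zip(a, b)) / len(a)

def cycle_consistency_loss(x_cect, y_ct):
    # L_cyc = ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1
    # This is what lets CycleGAN train without paired ceCT/CT volumes:
    # translating to the other modality and back must recover the input.
    return l1(F(G(x_cect)), x_cect) + l1(G(F(y_ct)), y_ct)

x = [0.8, 0.9, 1.0]   # toy ceCT slice intensities
y = [0.5, 0.6, 0.7]   # toy CT slice intensities
print(cycle_consistency_loss(x, y))  # exact inverses here, so ~0 (up to float error)
```

In the full method, this loss is combined with adversarial losses on each generator; the extension described in the abstract additionally imposes anatomical constraints so that slice-by-slice generation stays structurally consistent across the volume.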