Medical image translation (e.g. CT to MR) is a challenging task, as it requires I) faithful translation of domain-invariant features (e.g. the shape of anatomical structures) and II) realistic synthesis of target-domain features (e.g. tissue appearance in MR). In this work, we propose the Manifold Disentanglement Generative Adversarial Network (MDGAN), a novel image translation framework that explicitly models these two types of features. It employs a fully convolutional generator to model domain-invariant features, and it uses style codes to separately model target-domain features as a manifold. This design explicitly disentangles domain-invariant features from domain-specific features while providing individual control over both. The image translation process is formulated as a stylisation task, where the input is "stylised" (translated) into diverse target-domain images based on style codes sampled from the learnt manifold. We test MDGAN on multi-modal medical image translation, creating two domain-specific clusters on the manifold to translate segmentation maps into pseudo-CT and pseudo-MR images, respectively. We show that by traversing a path across the MR manifold cluster, the target output can be manipulated while still retaining the shape information from the input.
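The abstract describes injecting a style code into a convolutional generator so that domain-invariant content is "stylised" into a target domain. MDGAN's exact conditioning mechanism is not specified here; the following is a minimal NumPy sketch of one common way such style injection is done (adaptive instance normalisation), shown purely as an illustration of the idea. The manifold sampling is mocked with random per-channel statistics.

```python
import numpy as np

np.random.seed(0)

def adain(content, style_mean, style_std, eps=1e-5):
    """Replace the per-channel statistics of the content features
    with statistics derived from a style code (illustrative only)."""
    mu = content.mean(axis=(1, 2), keepdims=True)
    sigma = content.std(axis=(1, 2), keepdims=True)
    # normalise away the content's own statistics, then rescale
    # with the style's statistics
    return style_std * (content - mu) / (sigma + eps) + style_mean

# Domain-invariant content features: C=4 channels over an 8x8 grid
content = np.random.randn(4, 8, 8)

# Hypothetical style code sampled from a learnt manifold cluster,
# here mocked as per-channel target statistics for the MR domain
style_mean = np.random.randn(4, 1, 1)
style_std = np.abs(np.random.randn(4, 1, 1)) + 0.1

stylised = adain(content, style_mean, style_std)
```

After the transform, the feature map keeps the spatial layout of the content (shape information) while its per-channel statistics match the sampled style, which is the intuition behind manipulating appearance by traversing the style manifold.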