Cross-modal image synthesis is gaining significant interest for its ability to estimate target images of a different modality from a given set of source images, such as MR-to-MR, MR-to-CT, and CT-to-PET synthesis, without the need for an actual acquisition. Although such methods show potential for applications in radiation therapy planning, image super-resolution, atlas construction, and image segmentation, the synthesized results are not as accurate as the actual acquisition. In this paper, we address the problem of multi-modal image synthesis by proposing a fully convolutional deep learning architecture called SynNet. We extend the proposed architecture to various input-output configurations, and we propose a structure-preserving custom loss function for cross-modal image synthesis. We validate the proposed SynNet and its extended framework on the BRATS dataset against three state-of-the-art methods, and we compare the results of the proposed custom loss function against the traditional loss function used by those methods.
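The abstract does not specify the form of the structure-preserving loss. As an illustration only, the minimal sketch below shows one common way such a loss can be constructed, combining a pixelwise L1 intensity term with a gradient-difference term that penalizes mismatched edge structure. The function names, the weights alpha and beta, and the gradient-difference formulation are assumptions for illustration, not the paper's actual loss.

```python
# Illustrative sketch only: a plausible structure-preserving loss, assuming
# (hypothetically) an L1 intensity term plus a gradient-difference term.
# This is NOT the paper's actual loss, which is not given in the abstract.
import torch
import torch.nn.functional as F

def gradient_maps(img: torch.Tensor):
    """Finite-difference gradients along height and width of an (N, C, H, W) batch."""
    dy = img[:, :, 1:, :] - img[:, :, :-1, :]
    dx = img[:, :, :, 1:] - img[:, :, :, :-1]
    return dy, dx

def structure_preserving_loss(pred: torch.Tensor, target: torch.Tensor,
                              alpha: float = 1.0, beta: float = 0.5) -> torch.Tensor:
    """L1 intensity loss plus a gradient-difference penalty on edge structure."""
    intensity = F.l1_loss(pred, target)
    pdy, pdx = gradient_maps(pred)
    tdy, tdx = gradient_maps(target)
    structure = F.l1_loss(pdy, tdy) + F.l1_loss(pdx, tdx)
    return alpha * intensity + beta * structure
```

The gradient-difference term is one standard way to encode "structure preservation": it compares edge maps of the synthesized and ground-truth images rather than raw intensities alone, which the plain L1 or L2 losses used by typical baselines do not do.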