Unsupervised image transfer enables intra- and inter-modality image translation in applications where paired training data is scarce. To ensure a structure-preserving mapping from the input to the target domain, existing methods for unpaired image transfer commonly rely on cycle-consistency, which requires additional computational resources and introduces instability due to the learning of an inverse mapping. This paper presents a novel method for uni-directional domain mapping that does not rely on any paired training data. A proper transfer is achieved by using a GAN architecture and a novel generator loss based on patch invariance. More specifically, the generator outputs are evaluated and compared at different scales, which increases the focus on high-frequency details and acts as an implicit data augmentation. This novel patch loss also makes it possible to accurately predict aleatoric uncertainty by modeling an input-dependent scale map for the patch residuals. The proposed method is comprehensively evaluated on three well-established medical databases. Compared to four state-of-the-art methods, we observe significantly higher accuracy on these datasets, indicating great potential of the proposed method for unpaired image transfer with uncertainty taken into account. An implementation of the proposed framework is released at \url{https://github.com/anger-man/unsupervised-image-transfer-and-uq}.
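The abstract describes a multi-scale patch loss whose residuals are scored under an input-dependent scale map to capture aleatoric uncertainty. The following is a minimal, hypothetical numpy sketch of that idea, not the authors' implementation: residuals on random patches at several scales are scored under a Laplace negative log-likelihood $|r|/b + \log(2b)$, where the scale $b$ is predicted per pixel. Function name, patch sizes, and the single-patch-per-scale sampling are illustrative assumptions.

```python
import numpy as np

def patch_uncertainty_loss(output, target, scale_map,
                           patch_sizes=(8, 16, 32), rng=None):
    """Hypothetical sketch of a multi-scale patch loss with aleatoric
    uncertainty. Residuals between generator output and reference are
    collected on random patches at several scales and scored under a
    Laplace likelihood whose per-pixel scale b comes from `scale_map`.
    Minimising |r|/b + log(2b) lets the model down-weight pixels where
    the mapping is intrinsically ambiguous."""
    rng = rng or np.random.default_rng(0)
    h, w = output.shape
    total = 0.0
    for p in patch_sizes:
        # Sample one random patch per scale; a real implementation
        # would sample many patches and average over a batch.
        y = rng.integers(0, h - p + 1)
        x = rng.integers(0, w - p + 1)
        r = output[y:y + p, x:x + p] - target[y:y + p, x:x + p]
        b = np.maximum(scale_map[y:y + p, x:x + p], 1e-6)  # clamp scale
        total += np.mean(np.abs(r) / b + np.log(2.0 * b))
    return total / len(patch_sizes)
```

With a perfect reconstruction and unit scale map, the loss reduces to $\log 2$, the entropy-like floor of the Laplace likelihood; any residual or a smaller predicted scale raises it accordingly.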