Image-to-image translation is the problem of transferring an image from a source domain to a different (but related) target domain. We present a new unsupervised image-to-image translation technique that leverages the underlying semantic information for object transfiguration and domain transfer tasks. Specifically, we introduce a generative adversarial learning approach that jointly translates images and their labels from a source domain to a target domain. Our main technical contribution is an encoder-decoder network architecture that jointly encodes an image and its underlying semantics and translates each to the target domain. Additionally, we propose object transfiguration and cross-domain semantic consistency losses that preserve semantic labels during translation. Through extensive experimental evaluation, we demonstrate the effectiveness of our approach compared to state-of-the-art methods on unsupervised image-to-image translation, domain adaptation, and object transfiguration.
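To make the joint image-and-label translation idea concrete, the sketch below shows a minimal PyTorch generator that encodes an image together with its semantic label map into a shared latent code and decodes a translated image and a translated label map through separate heads, plus a simple cross-domain semantic consistency term. All module names, layer sizes, and the exact loss formulation here are illustrative assumptions, not the architecture or losses reported in the paper.

```python
# Minimal sketch of joint image + semantic-label translation (assumed, not the paper's exact design).
import torch
import torch.nn as nn
import torch.nn.functional as F


class JointTranslator(nn.Module):
    """Encodes an image and its one-hot semantic label map into a shared latent
    code, then decodes a translated image and a translated label map."""

    def __init__(self, img_channels=3, num_classes=20, latent_dim=64):
        super().__init__()
        # Shared encoder over the concatenated image and one-hot label map.
        self.encoder = nn.Sequential(
            nn.Conv2d(img_channels + num_classes, latent_dim, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(latent_dim, latent_dim * 2, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Separate decoder heads: one for the translated image, one for the labels.
        self.image_decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_dim * 2, latent_dim, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(latent_dim, img_channels, 4, stride=2, padding=1),
            nn.Tanh(),
        )
        self.label_decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_dim * 2, latent_dim, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(latent_dim, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, image, label_onehot):
        z = self.encoder(torch.cat([image, label_onehot], dim=1))
        return self.image_decoder(z), self.label_decoder(z)


def semantic_consistency_loss(pred_label_logits, source_labels):
    """Cross-domain semantic consistency (assumed form): the translated label
    map should still agree with the source per-pixel class labels."""
    return F.cross_entropy(pred_label_logits, source_labels)


if __name__ == "__main__":
    G = JointTranslator()
    img = torch.randn(2, 3, 64, 64)             # source-domain images
    labels = torch.randint(0, 20, (2, 64, 64))  # per-pixel class indices
    onehot = F.one_hot(labels, 20).permute(0, 3, 1, 2).float()
    fake_img, fake_label_logits = G(img, onehot)
    loss_sem = semantic_consistency_loss(fake_label_logits, labels)
    print(fake_img.shape, fake_label_logits.shape, loss_sem.item())
```

In a full training setup, this generator would be paired with adversarial discriminators on the target domain and the corresponding reverse-direction translator, with the semantic consistency term added to the adversarial objective; those components are omitted here for brevity.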