In this paper, we propose the Cross-Domain Adversarial Auto-Encoder (CDAAE) to address the problem of cross-domain image inference, generation, and transformation. We assume that images from different domains share the same latent code space for content, while having separate latent code spaces for style. The proposed framework maps cross-domain data to a latent code vector consisting of a content part and a style part. The latent code vector is matched with a prior distribution, so that meaningful samples can be generated from any part of the prior space. Consequently, given a sample from one domain, our framework can generate various samples of the other domain with the same content as the input. This distinguishes the proposed framework from existing work on cross-domain transformation. Moreover, the framework can be trained with both labeled and unlabeled data, which also makes it suitable for domain adaptation. Experimental results on the SVHN, MNIST, and CASIA datasets show that the proposed framework achieves visually appealing performance on the image generation task. We also demonstrate that the proposed method achieves superior results for domain adaptation. Code for our experiments is available at https://github.com/luckycallor/CDAAE.
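The core idea above (a shared content code, a per-domain style code matched to a prior, and per-domain decoders) can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the linear maps stand in for the actual encoder/decoder networks, and all dimensions and the N(0, I) style prior are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration.
CONTENT_DIM, STYLE_DIM, IMG_DIM = 8, 4, 64

# Toy linear maps standing in for the CDAAE encoder and the two
# domain-specific decoders.
W_enc = rng.normal(size=(IMG_DIM, CONTENT_DIM + STYLE_DIM))
W_dec_a = rng.normal(size=(CONTENT_DIM + STYLE_DIM, IMG_DIM))
W_dec_b = rng.normal(size=(CONTENT_DIM + STYLE_DIM, IMG_DIM))

def encode(x):
    """Map an image to a latent vector, split into content and style parts."""
    z = x @ W_enc
    return z[:CONTENT_DIM], z[CONTENT_DIM:]

def generate(content, style, domain):
    """Decode a (content, style) pair with the target domain's decoder."""
    z = np.concatenate([content, style])
    return z @ (W_dec_a if domain == "a" else W_dec_b)

# Cross-domain generation: keep the input's content code, and draw the
# style code from the prior to obtain varied outputs in the other domain.
x_a = rng.normal(size=IMG_DIM)            # an image from domain A
content, _ = encode(x_a)
samples_b = []
for _ in range(3):                        # several domain-B samples, same content
    style = rng.normal(size=STYLE_DIM)    # style sampled from an N(0, I) prior
    samples_b.append(generate(content, style, "b"))
```

Because the latent code is matched to the prior during training, sampling the style part from that prior yields meaningful variations while the fixed content part preserves what the input depicts.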