Multi-domain image-to-image (I2I) translations can transform a source image according to the style of a target domain. One important, desired characteristic of these transformations is their graduality: the generated image should change smoothly when the latent-space representations of the source and the target are linearly interpolated. However, state-of-the-art methods usually perform poorly when evaluated using inter-domain interpolations, often producing abrupt changes in appearance or unrealistic intermediate images. In this paper, we argue that one of the main reasons behind this problem is the lack of sufficient inter-domain training data, and we propose two different regularization methods to alleviate this issue: a new shrinkage loss, which compacts the latent space, and a Mixup data-augmentation strategy, which flattens the style representations between domains. We also propose a new metric to quantitatively evaluate the degree of interpolation smoothness, an aspect not sufficiently covered by existing I2I translation metrics. Using both our proposed metric and standard evaluation protocols, we show that our regularization techniques can improve state-of-the-art multi-domain I2I translations by a large margin. Our code will be made publicly available upon the acceptance of this article.
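To make the Mixup idea above concrete, the following is a minimal sketch of how style codes from two domains could be combined with a Beta-distributed mixing coefficient. It assumes style codes are plain NumPy vectors; the helper name `mixup_styles` and the choice of `alpha` are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup_styles(s_a, s_b, alpha=0.2):
    """Mixup-style interpolation of two style codes (hypothetical helper).

    Draws lam ~ Beta(alpha, alpha) and returns the convex combination
    lam * s_a + (1 - lam) * s_b. Training on such mixed codes is meant
    to flatten the style representation between the two domains, so
    that linear inter-domain interpolations stay on-manifold.
    """
    lam = rng.beta(alpha, alpha)
    return lam * s_a + (1.0 - lam) * s_b

# Example: two 64-dimensional style codes from different domains.
s_a = rng.standard_normal(64)
s_b = rng.standard_normal(64)
s_mix = mixup_styles(s_a, s_b)
```

In practice such a mixed code would be fed to the generator as an extra training target; because the output is a convex combination, every coordinate of `s_mix` lies between the corresponding coordinates of `s_a` and `s_b`.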