We consider the novel task of learning disentangled representations of object shape and appearance across multiple domains (e.g., dogs and cars). The goal is to learn a generative model that captures an intermediate distribution, borrowing a subset of properties from each domain, so that it can generate images that exist in neither domain alone. This challenging problem requires accurate disentanglement of object shape, appearance, and background within each domain, so that the appearance and shape factors of the two domains can be interchanged. We extend an existing approach that can disentangle factors within a single domain but struggles to do so across domains. Our key technical contribution is to represent object appearance with a differentiable histogram of visual features, and to optimize the generator so that two images with the same latent appearance factor but different latent shape factors produce similar histograms. On multiple multi-domain datasets, we demonstrate that our method achieves accurate and consistent appearance and shape transfer across domains.
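The abstract does not specify how the differentiable histogram is constructed. One common construction is Gaussian soft binning, where each feature value contributes to every bin with a kernel weight so that gradients flow back to the features. A minimal sketch under that assumption (all function and parameter names here are hypothetical, and a real implementation would use an autodiff framework such as PyTorch rather than NumPy):

```python
import numpy as np

def soft_histogram(features, bin_centers, bandwidth=0.1):
    """Kernel-based soft histogram of scalar visual features.

    Unlike a hard histogram, each feature value is spread over all
    bins with a Gaussian weight, so the output varies smoothly with
    the features (i.e., it is differentiable w.r.t. them).
    """
    # features: (N,) flattened feature values, assumed scaled to [0, 1]
    # bin_centers: (B,) centers of the histogram bins
    d = features[:, None] - bin_centers[None, :]   # (N, B) distances to bins
    w = np.exp(-0.5 * (d / bandwidth) ** 2)        # Gaussian kernel weights
    w = w / w.sum(axis=1, keepdims=True)           # each feature contributes mass 1
    hist = w.sum(axis=0)                           # accumulate over all features
    return hist / hist.sum()                       # normalize to a distribution

def appearance_loss(feat_a, feat_b, n_bins=16, bandwidth=0.1):
    """L1 distance between the soft histograms of two feature sets.

    In the paper's setting, feat_a and feat_b would come from two
    generated images sharing a latent appearance factor but differing
    in shape; minimizing this loss pushes their histograms together.
    """
    centers = np.linspace(0.0, 1.0, n_bins)
    h_a = soft_histogram(feat_a, centers, bandwidth)
    h_b = soft_histogram(feat_b, centers, bandwidth)
    return np.abs(h_a - h_b).sum()
```

Because the histogram discards spatial layout, two images with the same appearance but different shapes (a spatial rearrangement of similar features) incur near-zero loss, while a genuine appearance change does not.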