Garment transfer, which aims to transfer outfits across images of different people, shows great potential in real-world applications. However, garment transfer between images with heavy misalignment or severe occlusion remains a challenge. In this work, we propose Complementary Transfering Network (CT-Net) to adaptively model different levels of geometric changes and transfer outfits between different people. Specifically, CT-Net consists of three modules: 1) A complementary warping module first estimates two complementary warpings to transfer the desired clothes at different granularities. 2) A layout prediction module is proposed to predict the target layout, which guides the preservation or generation of the body parts in the synthesized image. 3) A dynamic fusion module adaptively combines the advantages of the complementary warpings to render the garment transfer results. Extensive experiments conducted on the DeepFashion dataset demonstrate that our network synthesizes high-quality garment transfer images and significantly outperforms the state-of-the-art methods both qualitatively and quantitatively.
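To make the dynamic fusion step concrete, below is a minimal PyTorch sketch of how two complementary warped feature maps could be adaptively combined via a predicted soft mask. The `DynamicFusion` module, the `mask_net` architecture, and the mask-based blending are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class DynamicFusion(nn.Module):
    """Hypothetical sketch: adaptively blend two complementary warpings.

    Assumes both warpings produce feature maps of identical shape; the
    layer sizes here are illustrative, not taken from the paper.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Predict a per-pixel soft fusion mask from the concatenated
        # warped features.
        self.mask_net = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, warp_a: torch.Tensor, warp_b: torch.Tensor) -> torch.Tensor:
        # m lies in [0, 1] per pixel and decides which warping dominates
        # at each spatial location.
        m = self.mask_net(torch.cat([warp_a, warp_b], dim=1))
        return m * warp_a + (1.0 - m) * warp_b

# Usage: fuse two 64-channel warped feature maps.
fusion = DynamicFusion(channels=64)
out = fusion(torch.randn(1, 64, 128, 96), torch.randn(1, 64, 128, 96))
print(out.shape)  # torch.Size([1, 64, 128, 96])
```

The soft mask lets one warping dominate in regions where it is more reliable (e.g., large geometric changes) while the other contributes where fine texture must be preserved, which matches the abstract's description of combining the advantages of the two complementary warpings.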