Editing hairstyles is uniquely challenging due to the complexity and delicacy of hair. Although recent approaches have significantly improved hair details, these models often produce undesirable outputs when the pose of the source image differs considerably from that of the target hair image, limiting their real-world applicability. HairFIT, a pose-invariant hairstyle transfer model, alleviates this limitation, yet it still falls short in preserving delicate hair textures. To address these limitations, we propose a high-performing pose-invariant hairstyle transfer model equipped with latent optimization and a newly proposed local-style-matching loss. In the StyleGAN2 latent space, we first find a pose-aligned latent code of the target hair that preserves its detailed textures via local style matching. Our model then inpaints the occluded regions of the source image conditioned on the aligned target hair and blends the two images to produce the final output. Experimental results demonstrate that our model excels at transferring hairstyles under large pose differences and at preserving local hair textures.
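To make the latent-optimization step concrete, the following is a minimal sketch, not the authors' implementation, of how a pose-aligned latent code might be optimized against a local style-matching objective in a StyleGAN2 latent space. All names here (the generator G, the feature extractor vgg_features, the hair mask, the pose loss pose_loss_fn, and the weights) are illustrative assumptions; the paper's actual loss formulation and optimization schedule may differ.

```python
import torch
import torch.nn.functional as F

def masked_gram(feat, mask):
    """Gram matrix of features restricted to a (downsampled) hair mask."""
    b, c, h, w = feat.shape
    m = F.interpolate(mask, size=(h, w), mode="nearest")
    f = (feat * m).reshape(b, c, -1)
    return f @ f.transpose(1, 2) / (m.sum() + 1e-8)

def optimize_aligned_latent(G, vgg_features, w_init, target_img, target_mask,
                            pose_loss_fn, steps=200, lr=0.05, lam_style=1.0):
    """Optimize a latent code so the synthesized hair matches the target's
    local style while a pose-alignment term (assumed given) is satisfied."""
    w = w_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    tgt_feats = [f.detach() for f in vgg_features(target_img)]
    for _ in range(steps):
        img = G(w)  # StyleGAN2 synthesis from the current latent
        feats = vgg_features(img)
        # Local style matching: compare Gram matrices inside the hair region
        # only. For simplicity the target's hair mask is reused on the
        # synthesized image; a real implementation would re-segment the hair.
        style = sum(F.mse_loss(masked_gram(f, target_mask),
                               masked_gram(t, target_mask))
                    for f, t in zip(feats, tgt_feats))
        loss = lam_style * style + pose_loss_fn(img)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()
```

Restricting the Gram-matrix comparison to the hair region is what distinguishes this local style matching from a standard global style loss: texture statistics are matched only where hair appears, so the surrounding face and background do not pull the optimization away from the target hairstyle.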