Reconstruction of human clothing is an important task that often relies on intrinsic image decomposition. With a lack of domain-specific data and only coarse evaluation metrics, existing models fail to produce satisfactory results for graphics applications. In this paper, we focus on intrinsic image decomposition for clothing images and make comprehensive improvements. We collect CloIntrinsics, a clothing intrinsic image dataset that includes a synthetic training set and a real-world testing set. A more interpretable edge-aware metric and an annotation scheme are designed for the testing set, enabling diagnostic evaluation of intrinsic models. Finally, we propose the ClothInNet model with carefully designed loss terms and an adversarial module. It exploits easy-to-acquire labels to learn from real-world shading, significantly improving performance with only minor additional annotation effort. We show that our proposed model substantially reduces texture-copying artifacts while retaining surprisingly fine details, outperforming existing state-of-the-art methods.
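For context, intrinsic image decomposition factors an observed image into a reflectance (albedo) layer and a shading layer. The formulation below is a minimal sketch of this standard per-pixel model, not the paper's specific losses; the symbols I, R, and S are assumed notation.

% Standard intrinsic decomposition (assumed notation): each pixel p of the
% observed image I is the element-wise product of reflectance R and shading S.
\begin{equation}
  I(p) = R(p) \odot S(p),
\end{equation}
% where \odot denotes per-channel multiplication; a decomposition network
% predicts R and S such that their product reconstructs I.

Under this model, texture-copying artifacts arise when reflectance texture leaks into the predicted shading layer S, which is what the proposed edge-aware metric is designed to diagnose.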