We propose BareSkinNet, a novel method that simultaneously removes makeup and lighting influences from a face image. Our method leverages a 3D morphable model and requires neither a reference clean-face image nor a specified lighting condition. By incorporating the process of 3D face reconstruction, we can easily obtain the 3D geometry and coarse 3D textures. Using this information, an image-translation network infers normalized 3D face texture maps (diffuse, normal, roughness, and specular). Consequently, the reconstructed 3D face textures, free of undesirable information, significantly benefit subsequent processes such as re-lighting or re-makeup. In experiments, we show that BareSkinNet outperforms state-of-the-art makeup-removal methods. In addition, our method is remarkably effective at removing makeup while generating consistent high-fidelity texture maps, which makes it extendable to many realistic face-generation applications. It can also automatically build graphics assets of face makeup images before and after, with corresponding 3D data. This will help artists accelerate their work, such as 3D makeup-avatar creation.
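As a purely illustrative sketch (not the authors' implementation), the four normalized texture maps named above could be organized as aligned per-pixel arrays; all names, shapes, and default values below are assumptions, and the translation step is a placeholder for the actual trained network:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class FaceTextures:
    """Hypothetical container for the normalized texture maps."""
    diffuse: np.ndarray    # RGB albedo, shape (H, W, 3)
    normal: np.ndarray     # tangent-space normals, shape (H, W, 3)
    roughness: np.ndarray  # scalar map, shape (H, W)
    specular: np.ndarray   # scalar map, shape (H, W)

def infer_bare_textures(coarse_texture: np.ndarray) -> FaceTextures:
    """Placeholder for the image-translation step: a real model would map
    the coarse makeup-and-lighting-affected texture to normalized maps."""
    h, w, _ = coarse_texture.shape
    return FaceTextures(
        diffuse=coarse_texture.copy(),                         # stand-in: pass-through
        normal=np.tile(np.array([0.5, 0.5, 1.0]), (h, w, 1)),  # flat "up" normal
        roughness=np.full((h, w), 0.5),                        # assumed default
        specular=np.full((h, w), 0.04),                        # assumed default
    )

tex = infer_bare_textures(np.zeros((8, 8, 3)))
print(tex.diffuse.shape, tex.roughness.shape)
```

The point of the sketch is only the interface: downstream re-lighting or re-makeup consumes these four aligned maps together with the reconstructed 3D geometry.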