Our task is to remove all facial parts (e.g., eyebrows, eyes, mouth and nose), and then impose visual elements onto the ``blank'' face for augmented reality. Conventional object removal methods rely on image inpainting techniques (e.g., EdgeConnect, HiFill) that are trained in a self-supervised manner on randomly manipulated image pairs. Specifically, given a set of natural images, randomly masked images are used as inputs and the raw images are treated as ground truths. However, this technique does not satisfy the requirements of facial parts removal, as it is hard to obtain ``ground-truth'' images of real ``blank'' faces. To address this issue, we propose a novel data generation technique that produces paired training data closely mimicking ``blank'' faces. In addition, we propose a novel network architecture that improves inpainting quality for our task. Finally, we demonstrate various face-oriented augmented reality applications on top of our facial parts removal model. The source code is released at \href{https://github.com/duxingren14/FaceEraser}{duxingren14/FaceEraser} on GitHub for research purposes.
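As an illustration of the conventional self-supervised pairing described above, the following is a minimal Python sketch of how (masked input, ground truth) pairs are typically generated from natural images using random free-form masks. The helper names and mask parameters are illustrative assumptions, not taken from the released code.

\begin{verbatim}
# Sketch of conventional self-supervised inpainting data generation:
# random brush-stroke masks applied to natural images yield
# (masked input, mask, ground truth) triples. Names/parameters are
# illustrative, not from the paper's released code.
import numpy as np

def random_stroke_mask(height, width, max_strokes=8, max_len=60,
                       max_width=20, rng=None):
    """Binary mask (1 = region to inpaint) made of random brush strokes."""
    rng = rng or np.random.default_rng()
    mask = np.zeros((height, width), dtype=np.float32)
    yy, xx = np.ogrid[:height, :width]
    for _ in range(rng.integers(1, max_strokes + 1)):
        x, y = rng.integers(0, width), rng.integers(0, height)
        for _ in range(rng.integers(4, 16)):
            angle = rng.uniform(0, 2 * np.pi)
            length = int(rng.integers(10, max_len))
            w = int(rng.integers(5, max_width))
            x2 = int(np.clip(x + length * np.cos(angle), 0, width - 1))
            y2 = int(np.clip(y + length * np.sin(angle), 0, height - 1))
            # Rasterize a thick segment by stamping discs along it.
            for t in np.linspace(0.0, 1.0, num=length):
                cx = int(x + t * (x2 - x))
                cy = int(y + t * (y2 - y))
                mask[(yy - cy) ** 2 + (xx - cx) ** 2 <= (w // 2) ** 2] = 1.0
            x, y = x2, y2
    return mask

def make_training_pair(image, rng=None):
    """image: float32 array in [0, 1], shape (H, W, 3).
    Returns (masked_input, mask, ground_truth)."""
    h, w, _ = image.shape
    mask = random_stroke_mask(h, w, rng=rng)[..., None]  # (H, W, 1)
    masked_input = image * (1.0 - mask)                  # holes zeroed out
    return masked_input, mask, image                     # raw image = target
\end{verbatim}

The abstract argues that this scheme breaks down for facial parts removal: masking out the eyes or mouth of a real photograph still leaves the original face as the training target, so no ``blank''-face ground truth is ever observed, which motivates the proposed data generation technique.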