This paper addresses the problem of face video inpainting. Existing video inpainting methods primarily target natural scenes with repetitive patterns. They do not exploit any prior knowledge of the face to help retrieve correspondences for the corrupted face, and therefore achieve only sub-optimal results, particularly for faces under large pose and expression variations, where face components appear very differently across frames. In this paper, we propose a two-stage deep learning method for face video inpainting. We employ a 3D Morphable Model (3DMM) as our 3D face prior to transform a face between the image space and the UV (texture) space. In Stage I, we perform face inpainting in the UV space. This largely removes the influence of face poses and expressions and makes the learning task much easier with well-aligned face features. We introduce a frame-wise attention module to fully exploit correspondences in neighboring frames to assist the inpainting task. In Stage II, we transform the inpainted face regions back to the image space and perform face video refinement, which inpaints any background regions not covered in Stage I and also refines the inpainted face regions. Extensive experiments show that our method significantly outperforms methods based merely on 2D information, especially for faces under large pose and expression variations.