Photo-realistic facial video portrait reenactment benefits virtual production and numerous VR/AR experiences. The task remains challenging because the portrait must maintain high realism and consistency with the target environment. In this paper, we present a relightable neural video portrait, a simultaneous relighting and reenactment scheme that transfers the head pose and facial expressions from a source actor to a portrait video of a target actor under arbitrary new backgrounds and lighting conditions. Our approach combines 4D reflectance field learning, model-based facial performance capture and target-aware neural rendering. Specifically, we adopt a rendering-to-video translation network to first synthesize high-quality OLAT image sets and alpha mattes from hybrid facial performance capture results. We then design a semantic-aware facial normalization scheme to enable reliable explicit control, as well as a multi-frame multi-task learning strategy to encode content, segmentation and temporal information simultaneously for high-quality reflectance field inference. After training, our approach further enables photo-realistic and controllable video portrait editing of the target performer. Reliable head pose and expression editing is achieved by applying the same hybrid facial capture and normalization scheme to the source video input, while our explicit alpha and OLAT output enable high-quality relighting and background editing. With the ability to achieve simultaneous relighting and reenactment, we are able to improve the realism in a variety of virtual production and video rewrite applications.
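To make the relighting and background-editing step concrete, below is a minimal sketch (not the paper's implementation) of standard image-based relighting from an inferred OLAT image set and alpha matte: the relit portrait is a weighted sum of the per-light OLAT images, with per-light RGB weights sampled from a target environment map, and is then composited over a new background using the alpha matte. The array names (olat_images, light_weights, alpha, background) are hypothetical.

```python
import numpy as np

def relight_and_composite(olat_images, light_weights, alpha, background):
    """Relight an OLAT image set and composite it over a new background.

    olat_images:   (L, H, W, 3) array, one image per light direction (linear RGB)
    light_weights: (L, 3) per-light RGB intensities sampled from an environment map
    alpha:         (H, W, 1) matte separating the portrait from the background
    background:    (H, W, 3) new background image (linear RGB)
    """
    # Reflectance-field relighting: weighted sum of the per-light (OLAT) images.
    relit = np.einsum('lhwc,lc->hwc', olat_images, light_weights)

    # Alpha compositing of the relit portrait onto the new background.
    out = alpha * relit + (1.0 - alpha) * background
    return np.clip(out, 0.0, 1.0)
```

Because the OLAT basis is linear in the lighting, any target illumination expressible over the captured light directions can be rendered this way, which is what allows the background and lighting to be edited independently of the reenacted expression and pose.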