Animating virtual avatars with free-view control is crucial for applications such as virtual reality and digital entertainment. Previous studies have attempted to utilize the representational power of the neural radiance field (NeRF) to reconstruct the human body from monocular videos. Recent works graft a deformation network onto the NeRF to further model the dynamics of the human neural field and animate vivid human motions. However, such pipelines either rely on pose-dependent representations or fall short of motion coherency due to frame-independent optimization, making it difficult to generalize realistically to unseen pose sequences. In this paper, we propose a novel framework, MonoHuman, which robustly renders view-consistent and high-fidelity avatars under arbitrary novel poses. Our key insight is to model the deformation field with bi-directional constraints and to explicitly leverage off-the-shelf keyframe information to reason about feature correlations for coherent results. Specifically, we first propose a Shared Bidirectional Deformation module, which creates a pose-independent, generalizable deformation field by disentangling backward and forward deformation correspondences into shared skeletal motion weights and separate non-rigid motions. We then devise a Forward Correspondence Search module, which queries the correspondence features of keyframes to guide the rendering network. The rendered results are thus multi-view consistent and high fidelity, even under challenging novel-pose settings. Extensive experiments demonstrate the superiority of the proposed MonoHuman over state-of-the-art methods.
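To make the "shared skeletal motion weight" idea concrete, the following is a minimal sketch (not the paper's implementation) of how sharing one set of blend-skinning weights between the backward (observation-to-canonical) and forward (canonical-to-observation) deformations yields a cycle-consistent rigid deformation by construction. The function names (`lbs_transform`, `backward_deform`) are illustrative, and the separate non-rigid residual motions the paper adds on top are omitted here.

```python
import numpy as np

def lbs_transform(points, weights, bone_transforms):
    """Forward deformation: linear blend skinning with per-bone 4x4 rigid transforms.
    points: (N, 3), weights: (N, B) rows summing to 1, bone_transforms: (B, 4, 4)."""
    blended = np.einsum('nb,bij->nij', weights, bone_transforms)      # (N, 4, 4)
    homo = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    return np.einsum('nij,nj->ni', blended, homo)[:, :3]

def backward_deform(obs_points, weights, bone_transforms):
    """Backward deformation: invert the SAME blended transform per point,
    mapping observation-space points back to canonical space."""
    blended = np.einsum('nb,bij->nij', weights, bone_transforms)
    inv = np.linalg.inv(blended)                                       # per-point inverse
    homo = np.concatenate([obs_points, np.ones((len(obs_points), 1))], axis=1)
    return np.einsum('nij,nj->ni', inv, homo)[:, :3]

# Because both directions share one weight field, the skeletal part of the
# forward and backward maps are exact inverses: forward(backward(x)) == x.
rng = np.random.RandomState(0)
obs = rng.randn(5, 3)
c, s = np.cos(0.5), np.sin(0.5)
T1 = np.array([[c, -s, 0, 0.1], [s, c, 0, 0.2], [0, 0, 1, 0.0], [0, 0, 0, 1.0]])
T2 = np.eye(4); T2[:3, 3] = [0.3, -0.1, 0.05]
bones = np.stack([T1, T2])
w = np.abs(rng.randn(5, 2)); w /= w.sum(axis=1, keepdims=True)

canonical = backward_deform(obs, w, bones)
round_trip = lbs_transform(canonical, w, bones)
cycle_error = np.abs(round_trip - obs).max()
```

In this simplified setting the cycle error is zero up to floating point, which is the consistency that pose-dependent or independently learned backward/forward fields cannot guarantee.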