Coordinate-based volumetric representations have the potential to generate photo-realistic virtual avatars from images. However, virtual avatars also need to be controllable, even in novel poses that may not have been observed. Traditional techniques, such as LBS, provide such control; yet they typically require a hand-designed body template and 3D scan data, and support only limited appearance models. On the other hand, neural representations have been shown to be powerful at capturing visual details, but remain underexplored for deforming dynamic articulated actors. In this paper, we propose TAVA, a method to create Template-free Animatable Volumetric Actors based on neural representations. We rely solely on multi-view data and a tracked skeleton to create a volumetric model of an actor, which can be animated at test time given a novel pose. Since TAVA does not require a body template, it is applicable to humans as well as other creatures such as animals. Furthermore, TAVA is designed to recover accurate dense correspondences, making it amenable to content-creation and editing tasks. Through extensive experiments, we demonstrate that the proposed method generalizes well to novel poses as well as unseen views, and we showcase basic editing capabilities.
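For reference, linear blend skinning (LBS), the traditional animation technique the abstract contrasts against, deforms each surface point as a convex combination of per-bone rigid transforms applied to its rest-pose position. A minimal NumPy sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def lbs(vertices, weights, bone_transforms):
    """Linear blend skinning.

    vertices:        (V, 3) rest-pose vertex positions
    weights:         (V, B) skinning weights, each row sums to 1
    bone_transforms: (B, 4, 4) rigid transform of each bone for the target pose
    returns:         (V, 3) deformed vertex positions
    """
    V = vertices.shape[0]
    # Homogeneous coordinates: (V, 4)
    homo = np.concatenate([vertices, np.ones((V, 1))], axis=1)
    # Blend the per-bone 4x4 transforms into one transform per vertex: (V, 4, 4)
    blended = np.einsum('vb,bij->vij', weights, bone_transforms)
    # Apply each vertex's blended transform to that vertex: (V, 4)
    deformed = np.einsum('vij,vj->vi', blended, homo)
    return deformed[:, :3]
```

A vertex skinned entirely to an identity bone stays fixed, while a vertex with weight split across a static bone and a translated bone moves half-way, which illustrates the blending behavior (and its well-known limits on soft tissue) that template-free methods like TAVA aim to move beyond.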