We present a method for fast 3D reconstruction and real-time rendering of dynamic humans from monocular videos with accompanying parametric body fits. Our method can reconstruct a dynamic human in less than 3h using a single GPU, whereas recent state-of-the-art alternatives take up to 72h. These speedups are obtained by using a lightweight deformation model based solely on linear blend skinning, and an efficient factorized volumetric representation for modeling the shape and color of the person in canonical pose. Moreover, we propose a novel local ray-marching rendering algorithm which, by exploiting standard GPU hardware and without any baking or conversion of the radiance field, allows visualizing the neural human on a mobile VR device at 40 frames per second with minimal loss of visual quality. Our experimental evaluation shows superior or competitive results with state-of-the-art methods while obtaining a large training speedup, using a simple model, and achieving real-time rendering.
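For context, the linear blend skinning deformation referenced above admits a compact closed form; the following is the standard LBS equation with conventional symbols, not notation taken from this paper:

\[
\mathbf{x}_p \;=\; \sum_{k=1}^{K} w_k(\mathbf{x}_c)\, \mathbf{B}_k\, \mathbf{x}_c,
\qquad \sum_{k=1}^{K} w_k(\mathbf{x}_c) = 1,
\]

where \(\mathbf{x}_c\) is a point on the body in canonical pose, \(\mathbf{B}_k \in SE(3)\) is the rigid transform of bone \(k\) derived from the parametric body fit, and \(w_k\) are the skinning weights. In canonical-representation pipelines of this kind, posed-space sample points are typically warped back into the canonical volume by inverting this deformation before querying the radiance field.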