The recent surge in popularity of volumetric representations for scene reconstruction and novel view synthesis has put renewed focus on animating volumetric content at high visual quality and in real time. While implicit deformation methods based on learned functions can produce impressive results, they are `black boxes' to artists and content creators, they require large amounts of training data to generalise meaningfully, and they do not produce realistic extrapolations outside the training data. In this work we solve these issues by introducing a volume deformation method that runs in real time, is easy to edit with off-the-shelf software, and extrapolates convincingly. To demonstrate the versatility of our method, we apply it in two scenarios: physics-based object deformation and telepresence, where avatars are controlled using blendshapes. We also perform thorough experiments showing that our method compares favourably to both volumetric approaches combined with implicit deformation and methods based on mesh deformation.