We present an implicit neural representation to learn the spatio-temporal space of kinematic motions. Unlike previous work that represents motion as discrete sequential samples, we propose to express the vast motion space as a continuous function over time, hence the name Neural Motion Fields (NeMF). Specifically, we use a neural network to learn this function for diverse sets of motions, designed as a generative model conditioned on a temporal coordinate $t$ and a random vector $z$ that controls the style. The model is then trained as a Variational Autoencoder (VAE) with motion encoders to sample the latent space. We train our model on a diverse human motion dataset and a quadruped motion dataset to demonstrate its versatility, and finally deploy it as a generic motion prior to solve task-agnostic problems, showing its superiority in various motion generation and editing applications, such as motion interpolation, in-betweening, and re-navigating.
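The core idea, a network that maps a continuous time coordinate $t$ and a style code $z$ to a pose, can be illustrated with a minimal sketch. The class name, dimensions, and random weights below are illustrative assumptions, not the paper's actual architecture; the point is only that the motion is queried as a continuous function of $t$ rather than stored as discrete frames.

```python
import numpy as np

class NeMFSketch:
    """Toy stand-in for a neural motion field: pose = f(t, z).

    A tiny randomly initialized MLP; in the actual method this network
    would be trained inside a VAE, with z sampled from the latent space.
    """

    def __init__(self, z_dim=8, pose_dim=24, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = 1 + z_dim  # scalar time t concatenated with style code z
        self.w1 = rng.normal(0.0, 0.5, (in_dim, hidden))
        self.w2 = rng.normal(0.0, 0.5, (hidden, pose_dim))

    def __call__(self, t, z):
        x = np.concatenate([[t], z])
        h = np.tanh(x @ self.w1)
        return h @ self.w2  # one pose vector at continuous time t

# Because f is continuous in t, the motion can be sampled at any
# temporal resolution, which is what enables interpolation and editing.
model = NeMFSketch()
z = np.zeros(8)  # one fixed style
poses = np.stack([model(t, z) for t in np.linspace(0.0, 1.0, 5)])
```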