We present an implicit neural representation for learning the spatio-temporal space of kinematic motions. Unlike previous work that represents motion as discrete sequential samples, we propose to express the vast motion space as a continuous function over time, hence the name Neural Motion Fields (NeMF). Specifically, we use a neural network to learn this function for miscellaneous sets of motions, designed as a generative model conditioned on a temporal coordinate $t$ and a random vector $z$ that controls the style. The model is then trained as a Variational Autoencoder (VAE) with motion encoders to sample the latent space. We train our model on diverse human motion and quadruped datasets to demonstrate its versatility, and finally deploy it as a generic motion prior for solving task-agnostic problems, showing its superiority in various motion generation and editing applications such as motion interpolation, in-betweening, and re-navigating. More details can be found on our project page: https://cs.yale.edu/homes/che/projects/nemf/
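The core idea above, that a motion is a continuous function $f(t, z)$ which can be queried at any real-valued time, can be sketched as follows. This is a minimal illustration with a tiny random-weight MLP standing in for the trained decoder; all dimensions and names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch only: a motion field f(t, z) -> pose. In NeMF this
# network would be the trained decoder of a VAE; here its weights are
# random and the dimensions below are arbitrary assumptions.
rng = np.random.default_rng(0)
Z_DIM, HIDDEN, POSE_DIM = 8, 32, 24  # assumed latent, hidden, and pose sizes

W1 = rng.standard_normal((Z_DIM + 1, HIDDEN)) * 0.1
W2 = rng.standard_normal((HIDDEN, POSE_DIM)) * 0.1

def motion_field(t: float, z: np.ndarray) -> np.ndarray:
    """Continuous motion function: any real-valued t yields a pose vector."""
    x = np.concatenate([[t], z])  # condition on the temporal coordinate t
    h = np.tanh(x @ W1)
    return h @ W2

z = rng.standard_normal(Z_DIM)   # latent "style" code, fixed for one motion
pose_a = motion_field(0.25, z)   # query at arbitrary, non-integer times --
pose_b = motion_field(0.30, z)   # no discrete frame sequence is stored
print(pose_a.shape)
```

Because the time axis is continuous rather than a list of frames, the same function supports interpolation or in-betweening by simply evaluating it at intermediate values of $t$.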