Our goal is to populate digital environments with digital humans that have diverse body shapes, move perpetually, and have plausible body-scene contact. The core challenge is to generate realistic, controllable, and infinitely long motions for diverse 3D bodies. To this end, we propose generative motion primitives via body surface markers, abbreviated as GAMMA. In our solution, we decompose the long-term motion into a time sequence of motion primitives. We exploit body surface markers and a conditional variational autoencoder to model each motion primitive, and generate long-term motion by applying the generative model recursively. To control the motion to reach a goal, we apply a policy network to explore the latent space of the model, and use a tree-based search to preserve the motion quality during testing. Experiments show that our method can produce more realistic and controllable motion than state-of-the-art data-driven methods. Combined with conventional path-finding algorithms, the generated human bodies can realistically move long distances for a long period of time in the scene. Code will be released for research purposes at: \url{https://yz-cnsdqz.github.io/eigenmotion/GAMMA/}
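The recursive scheme described above can be illustrated with a minimal toy sketch: a stand-in decoder maps a latent code and the last body frame to a short motion primitive, and a greedy one-step search over sampled latents steers the rollout toward a goal. All names, dimensions, and the decoder itself are hypothetical placeholders, not the paper's actual CVAE, policy network, or tree search.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 4
MARKER_DIM = 6  # toy stand-in for flattened body-surface-marker features

# Hypothetical fixed "decoder weights"; the real model is a learned CVAE.
W = rng.standard_normal((LATENT_DIM, MARKER_DIM))

def decode_primitive(z, last_frame):
    """Toy decoder: map a latent code and the seed frame to the
    next short motion primitive (here, 5 frames of marker features)."""
    offsets = np.tanh(z @ W)                       # per-frame displacement
    steps = np.cumsum(np.tile(offsets, (5, 1)) * 0.1, axis=0)
    return last_frame + steps                      # shape (5, MARKER_DIM)

def goal_score(primitive, goal):
    """Score a primitive by how close its last frame gets to the goal."""
    return -np.linalg.norm(primitive[-1] - goal)

def rollout(seed_frame, goal, n_primitives=3, n_candidates=8):
    """Recursive generation with a greedy one-step latent search:
    sample several latents, keep the primitive that best approaches the goal,
    then recurse from its last frame."""
    motion = [seed_frame[None, :]]
    frame = seed_frame
    for _ in range(n_primitives):
        candidates = [decode_primitive(rng.standard_normal(LATENT_DIM), frame)
                      for _ in range(n_candidates)]
        best = max(candidates, key=lambda p: goal_score(p, goal))
        motion.append(best)
        frame = best[-1]
    return np.concatenate(motion, axis=0)

motion = rollout(np.zeros(MARKER_DIM), goal=np.ones(MARKER_DIM))
print(motion.shape)  # seed frame + 3 primitives of 5 frames each
```

In the paper's full method, the latent samples come from a learned policy rather than a standard normal, and a tree-based search (not a greedy argmax) balances motion quality against goal progress.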