Our goal is to populate digital environments with digital humans that have diverse body shapes, move perpetually, and make plausible body-scene contact. The core challenge is to generate realistic, controllable, and infinitely long motions for diverse 3D bodies. To this end, we propose generative motion primitives via body surface markers, GAMMA for short. In our solution, we decompose long-term motion into a time sequence of motion primitives. We exploit body surface markers and a conditional variational autoencoder to model each motion primitive, and generate long-term motion by applying the generative model recursively. To control the motion so that it reaches a goal, we apply a policy network to explore the generative model's latent space, and use a tree-based search to preserve motion quality at test time. Experiments show that our method produces more realistic and controllable motion than state-of-the-art data-driven methods. Combined with conventional path-finding algorithms, the generated human bodies can realistically move over long distances and long time horizons in the scene. Code is released for research purposes at: \url{https://yz-cnsdqz.github.io/eigenmotion/GAMMA/}
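The recursive generation scheme described above can be illustrated with a toy sketch. Everything here is an assumption for illustration: `decode_primitive` stands in for the trained CVAE decoder, the goal-biased latent sampling stands in for the policy network, and a simple beam-style search over sampled latents stands in for the tree-based search. The real model operates on body surface markers, not 2D points.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode_primitive(cond, z):
    # Toy stand-in for the CVAE decoder: maps the current state `cond`
    # and a latent code `z` to the endpoint of the next motion primitive.
    return cond + 0.1 * z

def rollout_with_tree_search(start, goal, steps=50, branch=8, beam=2):
    """Recursive primitive generation: at each step, sample several latent
    codes, decode candidate primitives, and keep the `beam` rollouts whose
    endpoints are closest to the goal (a crude tree-based search)."""
    frontier = [(np.linalg.norm(start - goal), [start])]
    for _ in range(steps):
        candidates = []
        for _, path in frontier:
            cond = path[-1]
            for _ in range(branch):
                # Policy stand-in: bias latent samples toward the goal.
                z = (goal - cond) + 0.5 * rng.normal(size=cond.shape)
                nxt = decode_primitive(cond, z)
                candidates.append((np.linalg.norm(nxt - goal), path + [nxt]))
        candidates.sort(key=lambda c: c[0])
        frontier = candidates[:beam]   # prune to the best branches
        if frontier[0][0] < 0.05:      # close enough to the goal
            break
    return frontier[0][1]

path = rollout_with_tree_search(np.zeros(2), np.array([3.0, 4.0]))
```

The key point the sketch captures is that arbitrarily long motion comes from re-applying a short-horizon generative model to its own output, while goal-reaching comes from searching the latent space rather than editing the decoded motion directly.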