We present GANimator, a generative model that learns to synthesize novel motions from a single, short motion sequence. GANimator generates motions that resemble the core elements of the original motion while simultaneously synthesizing novel and diverse movements. Existing data-driven techniques for motion synthesis require a large motion dataset that contains the desired and specific skeletal structure. By contrast, GANimator only requires training on a single motion sequence, enabling novel motion synthesis for a variety of skeletal structures, e.g., bipeds, quadrupeds, hexapeds, and more. Our framework contains a series of generative and adversarial neural networks, each responsible for generating motions at a specific frame rate. The framework progressively learns to synthesize motion from random noise, enabling hierarchical control over the generated motion content across varying levels of detail. We show a number of applications, including crowd simulation, key-frame editing, style transfer, and interactive control, all learned from a single input sequence. Code and data for this paper are at https://peizhuoli.github.io/ganimator.
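To make the coarse-to-fine idea concrete, below is a minimal, hypothetical PyTorch sketch of a stack of per-frame-rate generator stages, in the spirit of the progressive architecture described above. It is not the authors' implementation; all class names, layer sizes, the number of stages, and the 6D-rotation feature layout are illustrative assumptions.

```python
# Hypothetical sketch (not the GANimator code): a coarse-to-fine stack of
# temporal 1-D convolutional generators, one stage per frame rate.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StageGenerator(nn.Module):
    """One stage: refines an upsampled coarse motion using injected noise."""
    def __init__(self, channels: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, hidden, kernel_size=5, padding=2),
            nn.LeakyReLU(0.2),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.LeakyReLU(0.2),
            nn.Conv1d(hidden, channels, kernel_size=5, padding=2),
        )

    def forward(self, coarse, noise):
        # Predict a residual on top of the temporally upsampled coarse motion.
        return coarse + self.net(coarse + noise)

def synthesize(stages, base_length, channels, scale=2.0, device="cpu"):
    """Run the stack from pure noise at the coarsest temporal resolution."""
    motion = torch.zeros(1, channels, base_length, device=device)
    for i, stage in enumerate(stages):
        if i > 0:  # move to the next (finer) frame rate
            motion = F.interpolate(motion, scale_factor=scale,
                                   mode="linear", align_corners=False)
        noise = torch.randn_like(motion)
        motion = stage(motion, noise)
    return motion  # (1, channels, frames): e.g. per-joint rotation features

# Usage: 4 stages, 6D-rotation features for 24 joints (144 channels, assumed).
stages = nn.ModuleList(StageGenerator(144) for _ in range(4))
out = synthesize(stages, base_length=16, channels=144)
print(out.shape)  # torch.Size([1, 144, 128])
```

In such a design, each stage would be trained adversarially against crops of the single input sequence at its own temporal resolution, which is what gives hierarchical control: noise injected at coarse stages changes the global composition of the motion, while noise at fine stages only perturbs details.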