Motion synthesis in a dynamic environment has been a long-standing problem for character animation. Methods using motion capture data tend to scale poorly in complex environments because of their substantial capture and labeling requirements. Physics-based controllers are effective in this regard, albeit less controllable. In this paper, we present CARL, a quadruped agent that can be controlled with high-level directives and react naturally to dynamic environments. Starting with an agent that can imitate individual animation clips, we use Generative Adversarial Networks to adapt high-level controls, such as speed and heading, to action distributions that correspond to the original animations. Further fine-tuning through deep reinforcement learning enables the agent to recover from unseen external perturbations while producing smooth transitions. It then becomes straightforward to create autonomous agents in dynamic environments by adding navigation modules on top of this process. We evaluate our approach by measuring the agent's ability to follow user controls, and provide a visual analysis of the generated motion to show its effectiveness.
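The GAN-based adaptation stage described above can be pictured as a generator that maps a high-level control signal (e.g. target speed and heading) together with the character state to an action, while a discriminator judges whether the resulting state-action pair looks like it came from the imitation policy. The sketch below is purely illustrative, not the paper's implementation: the network sizes, dimensions, and names are hypothetical, and the tiny NumPy MLPs stand in for the actual policy and discriminator networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_init(sizes):
    # Random weights for a small MLP; illustrative only, not trained.
    return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, x):
    # Tanh hidden layers, linear output layer.
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

# Hypothetical dimensions for character state, control, and action.
STATE_DIM, CTRL_DIM, ACTION_DIM = 8, 2, 4

# Generator: (state, high-level control) -> action.
G = mlp_init([STATE_DIM + CTRL_DIM, 32, ACTION_DIM])
# Discriminator: (state, action) -> scalar realism score.
D = mlp_init([STATE_DIM + ACTION_DIM, 32, 1])

state = rng.normal(size=(1, STATE_DIM))
control = np.array([[1.5, 0.3]])  # e.g. target speed and heading
action = mlp_forward(G, np.concatenate([state, control], axis=1))
score = mlp_forward(D, np.concatenate([state, action], axis=1))
```

In training, the discriminator's score would be used as a reward or adversarial loss so that actions produced for a given control stay within the action distribution of the original animation clips; the subsequent DRL fine-tuning then optimizes task reward on top of this adapted policy.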