We tackle the problem of generating long-term 3D human motion from multiple action labels. The two main previous approaches, action-conditioned and motion-conditioned methods, each have limitations in solving this problem. Action-conditioned methods generate a motion sequence from a single action label; hence, they cannot generate long-term motions composed of multiple actions and the transitions between them. Meanwhile, motion-conditioned methods generate future motion from an initial motion. The generated future motion depends only on the past, so it cannot be controlled by the user's desired actions. We present MultiAct, the first framework to generate long-term 3D human motion from multiple action labels. MultiAct accounts for both action and motion conditions with a unified recurrent generation system. It repetitively takes the previous motion and an action label; then, it generates a smooth transition followed by the motion of the given action. As a result, MultiAct produces realistic long-term motion controlled by the given sequence of multiple action labels. Code is available at https://github.com/TaeryungLee/MultiAct_RELEASE.
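The recurrent generation system described above can be sketched as a simple loop: each step conditions on the previously generated motion segment and the next user-specified action label, producing a transition plus the new action's motion. The sketch below is purely illustrative; the function and variable names (`generate_segment`, `generate_long_term`) are assumptions for exposition and not the authors' actual API, and the generator body is a placeholder rather than the real learned model.

```python
# Hypothetical sketch of a recurrent long-term generation loop in the
# style described by the abstract. Names and structure are illustrative
# assumptions, not the MultiAct codebase.

def generate_segment(prev_motion, action_label):
    # Placeholder for the conditional generator: given the previous
    # motion and an action label, it would produce a smooth transition
    # followed by the motion of that action. Here we return tagged
    # frame identifiers to show the data flow only.
    transition = [f"trans({action_label})"] if prev_motion else []
    motion = [f"{action_label}_frame{i}" for i in range(3)]
    return transition + motion

def generate_long_term(action_labels):
    # Recurrently feed the previously generated segment back in, so
    # each new segment depends on both the past motion and the next
    # user-controlled action label.
    full_motion, prev = [], []
    for label in action_labels:
        segment = generate_segment(prev, label)
        full_motion.extend(segment)
        prev = segment
    return full_motion

print(generate_long_term(["walk", "sit"]))
```

Note that the first segment has no transition (there is no previous motion to blend from), while every later segment begins with one; this mirrors how the unified system handles both the action condition and the motion condition in a single recurrence.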