We present a novel data-driven framework for unsupervised human motion retargeting that animates a target body shape with a source motion, allowing motions to be transferred between different characters by animating a target subject with the motion of a source subject. Our method is correspondence-free,~\ie it requires neither spatial correspondences between the source and target shapes nor temporal correspondences between different frames of the source motion. It directly animates a target shape with arbitrary sequences of humans in motion, possibly captured using 4D acquisition platforms or consumer devices. Our framework takes a long-term temporal context of $1$ second into account during retargeting while preserving surface detail. To achieve this, we take inspiration from two lines of existing work: skeletal motion retargeting, which leverages long-term temporal context at the cost of surface detail, and surface-based retargeting, which preserves surface detail without considering long-term temporal context. We unify the advantages of both by combining a learnt skinning field with a skeletal retargeting approach. During inference, our method runs online,~\ie the input can be processed sequentially, and retargeting is performed in a single forward pass per frame. Experiments show that including long-term temporal context during training improves accuracy in terms of both the retargeted skeletal motion and the preservation of surface detail. Furthermore, our method generalizes well to unseen motions and body shapes. We demonstrate that the proposed framework achieves state-of-the-art results on two test datasets.
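As a rough illustration of how a learnt skinning field can be combined with skeletal retargeting, consider a linear-blend-skinning style deformation in which a network predicts per-vertex skinning weights; this is a minimal sketch under that assumption, and the notation ($w_j$, $\mathbf{T}_j$, $\mathbf{v}_i$) is illustrative rather than taken from the paper:
\[
\hat{\mathbf{v}}_i \;=\; \sum_{j=1}^{J} w_j(\mathbf{v}_i)\,\mathbf{T}_j\,\mathbf{v}_i,
\qquad \sum_{j=1}^{J} w_j(\mathbf{v}_i) = 1,
\]
where $\mathbf{v}_i$ is a vertex of the target shape in a canonical pose (in homogeneous coordinates), $w_j(\mathbf{v}_i)$ are the skinning weights predicted by the learnt skinning field, $\mathbf{T}_j$ is the transformation of bone $j$ given by the retargeted skeletal motion for the current frame, and $\hat{\mathbf{v}}_i$ is the resulting deformed vertex.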