Generating digital humans that move realistically has many applications and is widely studied, but existing methods focus on the major limbs of the body, ignoring the hands and head. Hands have been separately studied, but the focus there has been on generating realistic static grasps of objects. To synthesize virtual characters that interact with the world, we need to generate full-body motions and realistic hand grasps simultaneously. Both sub-problems are challenging on their own and, together, the state-space of poses is significantly larger, the scales of hand and body motions differ, and the whole-body posture and the hand grasp must agree, satisfy physical constraints, and be plausible. Additionally, the head is involved because the avatar must look at the object to interact with it. For the first time, we address the problem of generating full-body, hand, and head motions of an avatar grasping an unknown object. As input, our method, called GOAL, takes a 3D object, its position, and a starting 3D body pose and shape. GOAL outputs a sequence of whole-body poses using two novel networks. First, GNet generates a goal whole-body grasp with a realistic body, head, arm, and hand pose, as well as hand-object contact. Second, MNet generates the motion between the starting and goal pose. This is challenging, as it requires the avatar to walk towards the object with foot-ground contact, orient the head towards it, reach out, and grasp it with a realistic hand pose and hand-object contact. To achieve this, the networks exploit a representation that combines SMPL-X body parameters and 3D vertex offsets. We train and evaluate GOAL, both qualitatively and quantitatively, on the GRAB dataset. Results show that GOAL generalizes well to unseen objects, outperforming baselines. GOAL takes a step towards synthesizing realistic full-body object grasping.
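The two-stage design described above, where GNet first predicts a goal grasp pose and MNet then fills in the motion from the start pose to that goal, can be sketched as a simple pipeline. This is a toy illustration only: the function names, the 6-DoF "pose" vector, and the placeholder logic (root translation toward the object, linear interpolation between poses) are assumptions for demonstration, not the authors' networks, which operate on full SMPL-X parameters and 3D vertex offsets.

```python
import numpy as np

def gnet(object_pos, start_pose):
    """Stage 1 (GNet stand-in): predict a goal whole-body grasp pose.
    Toy logic: copy the start pose and move the body root to the object."""
    goal = start_pose.copy()
    goal[:3] = object_pos  # placeholder for a learned grasp-pose prediction
    return goal

def mnet(start_pose, goal_pose, n_frames):
    """Stage 2 (MNet stand-in): generate the motion from start to goal.
    Toy logic: linear interpolation; the real MNet predicts plausible
    walking, reaching, and grasping motion with foot-ground contact."""
    ts = np.linspace(0.0, 1.0, n_frames)[:, None]  # (n_frames, 1)
    return (1.0 - ts) * start_pose + ts * goal_pose  # (n_frames, D)

# Toy inputs: a 6-D "pose" (3 root-translation + 3 joint parameters).
start = np.zeros(6)
obj_pos = np.array([1.0, 0.0, 0.5])

goal = gnet(obj_pos, start)
motion = mnet(start, goal, n_frames=30)
print(motion.shape)  # (30, 6): a sequence of whole-body poses
```

The key structural point the sketch captures is that the motion generator is conditioned on both endpoints, so the final frame of the sequence matches the grasp pose produced by the first stage.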