Learning dexterous and agile policies for humanoid and dexterous hand control requires large-scale demonstrations, but collecting robot-specific data is prohibitively expensive. In contrast, abundant human motion data is readily available from motion capture, videos, and virtual reality, and could help address this data scarcity. However, due to the embodiment gap and missing dynamic information such as force and torque, these demonstrations cannot be directly executed on robots. To bridge this gap, we propose Scalable Physics-Informed DExterous Retargeting (SPIDER), a physics-based retargeting framework that transforms and augments kinematic-only human demonstrations into dynamically feasible robot trajectories at scale. Our key insight is that human demonstrations should provide the global task structure and objective, while large-scale physics-based sampling with curriculum-style virtual contact guidance should refine trajectories to ensure dynamic feasibility and correct contact sequences. SPIDER scales across 9 diverse humanoid/dexterous hand embodiments and 6 datasets, improving success rates by 18% over standard sampling while being 10X faster than reinforcement learning (RL) baselines, and it enables the generation of a 2.4M-frame dynamically feasible robot dataset for policy learning. As a universal physics-based retargeting method, SPIDER can work with data of diverse quality and generate diverse, high-quality data to enable efficient policy learning with methods such as RL.