Achieving efficient and robust whole-body control (WBC) is essential for enabling humanoid robots to perform complex tasks in dynamic environments. Despite the success of reinforcement learning (RL) in this domain, its sample inefficiency remains a significant challenge due to the intricate dynamics and partial observability of humanoid robots. To address this limitation, we propose PvP, a Proprioceptive-Privileged contrastive learning framework that leverages the intrinsic complementarity between proprioceptive and privileged states. PvP learns compact and task-relevant latent representations without requiring hand-crafted data augmentations, enabling faster and more stable policy learning. To support systematic evaluation, we develop SRL4Humanoid, the first unified and modular framework that provides high-quality implementations of representative state representation learning (SRL) methods for humanoid robot learning. Extensive experiments on the LimX Oli robot across velocity tracking and motion imitation tasks demonstrate that PvP significantly improves sample efficiency and final performance compared to baseline SRL methods. Our study further provides practical insights into integrating SRL with RL for humanoid WBC, offering valuable guidance for data-efficient humanoid robot learning.
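The abstract does not specify PvP's training objective, but a contrastive loss over paired proprioceptive and privileged states, with no data augmentations, can be sketched with a standard InfoNCE-style objective. Everything below is an illustrative assumption, not the paper's implementation: the function name `info_nce_loss`, the inputs `z_prop` / `z_priv` (encoder outputs for the two state streams at matched timesteps), and the `temperature` value are all hypothetical.

```python
import numpy as np

def info_nce_loss(z_prop, z_priv, temperature=0.1):
    """InfoNCE-style contrastive loss over a batch of paired embeddings.

    z_prop: (B, D) proprioceptive-state embeddings (hypothetical encoder output).
    z_priv: (B, D) privileged-state embeddings; row i of each array is assumed
            to come from the same timestep and thus forms the positive pair.
    """
    # L2-normalize so the dot product is cosine similarity.
    z_prop = z_prop / np.linalg.norm(z_prop, axis=1, keepdims=True)
    z_priv = z_priv / np.linalg.norm(z_priv, axis=1, keepdims=True)

    # (B, B) similarity matrix: entry (i, j) compares proprioceptive sample i
    # against privileged sample j. Off-diagonal entries act as negatives,
    # so no hand-crafted augmentations are needed to form contrastive pairs.
    logits = z_prop @ z_priv.T / temperature

    # Numerically stable log-softmax over each row.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_prob = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))

    # Positives sit on the diagonal (matched timesteps); minimize their
    # negative log-likelihood against all in-batch negatives.
    return -np.mean(np.diag(log_prob))
```

Under this sketch, the two encoders are pulled toward a shared latent space in which a proprioceptive observation is closest to the privileged state recorded at the same instant, which is one plausible way to exploit the complementarity the abstract describes.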