Large-scale robot learning has recently shown promise for enabling robots to perform complex tasks by integrating perception, control, and language understanding. Yet it struggles with long-horizon, contact-rich manipulation such as deformable object handling, where demonstration quality is inconsistent. Reward modeling offers a natural solution: by providing grounded progress signals, it turns noisy demonstrations into stable supervision that generalizes across diverse trajectories. We introduce a stage-aware, video-based reward modeling framework that jointly predicts high-level task stages and fine-grained within-stage progress. Reward labels are derived automatically from natural-language subtask annotations, ensuring consistent progress estimation across variable-length demonstrations. This design avoids the pitfalls of frame-index labeling, which breaks down on variable-duration tasks such as folding a T-shirt. Our reward model demonstrates robustness to demonstration variability, generalization to out-of-distribution settings, and strong utility for policy training. Building on it, we propose Reward-Aligned Behavior Cloning (RA-BC), which filters out low-quality data and reweights the remaining samples by predicted reward. Experiments show that the reward model alone outperforms baselines on both validation sets and real-robot rollouts. Integrated into RA-BC, our approach achieves 83% success on folding T-shirts from the flattened state and 67% from the crumpled state -- far surpassing vanilla behavior cloning, which attains only 8% and 0%. Overall, our results highlight reward modeling as a key enabler of scalable, annotation-efficient, and robust imitation learning for long-horizon manipulation.
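The stage-aware labeling idea can be sketched as follows: each demonstration is segmented into subtasks via its natural-language annotations, and every frame receives a reward combining its stage index with fractional progress inside that stage. This is a minimal illustrative sketch; the function name, the `(start, end)` boundary format, and the linear within-stage interpolation are assumptions, not the paper's exact formulation.

```python
def stage_aware_rewards(num_frames, stage_boundaries):
    """Assign each frame a reward in [0, 1) that combines the stage index
    with fractional progress inside the stage, so demonstrations of very
    different lengths receive consistent labels (unlike raw frame indices).

    stage_boundaries: list of (start, end) frame ranges, one per subtask,
    assumed to partition [0, num_frames) in order.
    """
    num_stages = len(stage_boundaries)
    rewards = [0.0] * num_frames
    for k, (start, end) in enumerate(stage_boundaries):
        span = max(end - start, 1)  # guard against zero-length stages
        for t in range(start, end):
            progress = (t - start) / span        # progress within stage k
            rewards[t] = (k + progress) / num_stages
    return rewards
```

Because progress is normalized per stage, a slow demonstration and a fast one of the same subtask map to the same reward curve shape, which is the property frame-index labels lack.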