Visuomotor imitation learning policies enable robots to efficiently acquire manipulation skills from visual demonstrations. However, as scene complexity and visual distractions increase, policies that perform well in simple settings often suffer substantial performance degradation. To address this challenge, we propose ImitDiff, a diffusion-based imitation learning policy guided by fine-grained semantics within a dual-resolution workflow. Leveraging the pretrained priors of vision-language foundation models, our method transforms high-level instructions into pixel-level visual semantic masks. These masks guide a dual-resolution perception pipeline that captures both global context (e.g., overall layout) from low-resolution observations and fine-grained local features (e.g., geometric details) from high-resolution observations, enabling the policy to focus on task-relevant regions. Additionally, we introduce a consistency-driven diffusion transformer action head that bridges visual semantic conditions and real-time action generation. Extensive experiments demonstrate that ImitDiff outperforms state-of-the-art vision-language manipulation frameworks as well as visuomotor imitation learning policies, particularly under increased scene complexity and visual distraction. Notably, ImitDiff exhibits strong zero-shot generalization to novel objects and visual distractions. Furthermore, our consistency-driven action head achieves an order-of-magnitude improvement in inference speed while maintaining competitive success rates.
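To make the described pipeline concrete, the following is a minimal conceptual sketch (not the authors' implementation) of the two ideas named in the abstract: a mask-guided dual-resolution encoder and a few-step, consistency-style action head. All module names, layer sizes, the crop heuristic, and the sampling schedule are illustrative assumptions.

```python
# Conceptual sketch only: mask-guided dual-resolution perception plus a
# consistency-style few-step action sampler. Shapes and heuristics are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualResolutionEncoder(nn.Module):
    """Fuses global context from a low-res view with local detail from a
    high-res crop selected by a visual semantic mask (hypothetical sizes)."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.global_net = nn.Sequential(  # low-res branch: overall scene layout
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(32 * 16, feat_dim))
        self.local_net = nn.Sequential(   # high-res branch: fine geometric detail
            nn.Conv2d(4, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(32 * 16, feat_dim))

    def forward(self, low_res, high_res, mask):
        # Crop the high-res image around the semantic mask's bounding box.
        ys, xs = torch.nonzero(mask[0] > 0.5, as_tuple=True)
        if len(ys) > 0:
            y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
            crop = high_res[:, :, y0:y1, x0:x1]
            mask_crop = mask[:, None, y0:y1, x0:x1]
        else:  # empty mask: fall back to the full high-res view
            crop, mask_crop = high_res, mask[:, None]
        crop = F.interpolate(torch.cat([crop, mask_crop], dim=1), size=(128, 128))
        g = self.global_net(F.interpolate(low_res, size=(96, 96)))
        return torch.cat([g, self.local_net(crop)], dim=-1)


class ConsistencyActionHead(nn.Module):
    """Maps a noisy action chunk directly to a denoised one, so sampling needs
    only a few network calls instead of a long diffusion chain."""

    def __init__(self, cond_dim: int, action_dim: int = 7, horizon: int = 16):
        super().__init__()
        self.horizon, self.action_dim = horizon, action_dim
        self.net = nn.Sequential(
            nn.Linear(cond_dim + horizon * action_dim + 1, 512), nn.ReLU(),
            nn.Linear(512, horizon * action_dim))

    def forward(self, noisy_actions, t, cond):
        x = torch.cat([noisy_actions.flatten(1), t[:, None], cond], dim=-1)
        return self.net(x).view(-1, self.horizon, self.action_dim)

    @torch.no_grad()
    def sample(self, cond, steps: int = 2):
        a = torch.randn(cond.shape[0], self.horizon, self.action_dim)
        for t in torch.linspace(1.0, 0.0, steps):  # few-step refinement
            a = self(a, t.expand(cond.shape[0]), cond)
        return a


if __name__ == "__main__":
    enc = DualResolutionEncoder()
    head = ConsistencyActionHead(cond_dim=512)
    low = torch.rand(1, 3, 240, 320)    # low-resolution observation
    high = torch.rand(1, 3, 480, 640)   # high-resolution observation
    mask = torch.zeros(1, 480, 640)     # pixel-level mask from a VLM (assumed given)
    mask[:, 200:300, 250:400] = 1.0
    actions = head.sample(enc(low, high, mask))
    print(actions.shape)  # torch.Size([1, 16, 7])
```

The sketch reflects the division of labor the abstract describes: the semantic mask restricts the high-resolution branch to task-relevant regions while the low-resolution branch retains global layout, and the action head trades the iterative diffusion sampler for a few consistency-style evaluations, which is where the claimed inference-speed gain would come from.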