Vision-Language-Action (VLA) models have demonstrated significant potential in real-world robotic manipulation. However, pre-trained VLA policies still suffer from substantial performance degradation during downstream deployment. Although fine-tuning can mitigate this issue, its reliance on costly demonstration collection and intensive computation makes it impractical in real-world settings. In this work, we introduce VLA-Pilot, a plug-and-play inference-time policy steering method for zero-shot deployment of pre-trained VLA policies without any additional fine-tuning or data collection. We evaluate VLA-Pilot on six real-world downstream manipulation tasks across two distinct robotic embodiments, encompassing both in-distribution and out-of-distribution scenarios. Experimental results demonstrate that VLA-Pilot substantially boosts the success rates of off-the-shelf pre-trained VLA policies, enabling robust zero-shot generalization across diverse tasks and embodiments. Experimental videos and code are available at: https://rip4kobe.github.io/vla-pilot/.