We present STORM (Search-Guided Generative World Models), a novel framework for spatio-temporal reasoning in robotic manipulation that unifies diffusion-based action generation, conditional video prediction, and search-based planning. Unlike prior Vision-Language-Action (VLA) models that rely on abstract latent dynamics or delegate reasoning to language components, STORM grounds planning in explicit visual rollouts, enabling interpretable and foresight-driven decision-making. A diffusion-based VLA policy proposes diverse candidate actions, a generative video world model simulates their visual and reward outcomes, and Monte Carlo Tree Search (MCTS) selectively refines plans through lookahead evaluation. Experiments on the SimplerEnv manipulation benchmark demonstrate that STORM achieves a new state-of-the-art average success rate of 51.0 percent, outperforming strong baselines such as CogACT. Reward-augmented video prediction substantially improves spatio-temporal fidelity and task relevance, reducing Fréchet Video Distance (FVD) by over 75 percent. Moreover, STORM exhibits robust re-planning and failure-recovery behavior, highlighting the advantages of search-guided generative world models for long-horizon robotic manipulation.
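To make the propose-simulate-search pipeline concrete, the sketch below shows a minimal MCTS planning loop over imagined world-model rollouts. It is a hypothetical illustration under stated assumptions, not the authors' implementation: `propose_actions` stands in for the diffusion VLA policy, `simulate` for the reward-augmented video world model, and all names and hyperparameters are placeholders.

```python
# Minimal sketch of search-guided planning with a generative world model.
# Hypothetical stand-ins, not STORM's actual API: propose_actions() mimics the
# diffusion VLA policy, simulate() mimics the reward-augmented video predictor.
import math
import random

class Node:
    def __init__(self, obs, parent=None, action=None):
        self.obs, self.parent, self.action = obs, parent, action
        self.children, self.visits, self.value = [], 0, 0.0

def propose_actions(obs, k=4):
    # Placeholder: the diffusion policy would sample k diverse action candidates.
    return [f"a{i}" for i in range(k)]

def simulate(obs, action):
    # Placeholder: the world model would predict the next visual observation
    # and a scalar reward estimate; here both are dummies.
    return f"{obs}->{action}", random.random()

def ucb(node, c=1.4):
    # UCB1 score balancing exploitation and exploration.
    if node.visits == 0:
        return math.inf
    return node.value / node.visits + c * math.sqrt(
        math.log(node.parent.visits) / node.visits)

def mcts(root_obs, iters=100, horizon=3):
    root = Node(root_obs)
    for _ in range(iters):
        # Selection: descend the tree by UCB until reaching a leaf.
        node = root
        while node.children:
            node = max(node.children, key=ucb)
        # Expansion: add one imagined child per proposed action.
        if node.visits > 0:
            for a in propose_actions(node.obs):
                nxt, _ = simulate(node.obs, a)
                node.children.append(Node(nxt, node, a))
            node = random.choice(node.children)
        # Rollout: short imagined trajectory, accumulating predicted reward.
        obs, ret = node.obs, 0.0
        for _ in range(horizon):
            a = random.choice(propose_actions(obs))
            obs, r = simulate(obs, a)
            ret += r
        # Backpropagation: update value statistics along the path.
        while node:
            node.visits += 1
            node.value += ret
            node = node.parent
    # Execute the most-visited root action; re-plan after each real step.
    return max(root.children, key=lambda n: n.visits).action

print(mcts("frame_0"))
```

Re-running the search from each new observation is what yields the re-planning and failure-recovery behavior the abstract describes: if an executed action's real outcome diverges from the imagined rollout, the next search starts from the observed state rather than the stale plan.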