Model-based reinforcement learning methods often use learning only for the purpose of estimating an approximate dynamics model, offloading the rest of the decision-making work to classical trajectory optimizers. While conceptually simple, this combination has a number of empirical shortcomings, suggesting that learned models may not be well-suited to standard trajectory optimization. In this paper, we consider what it would look like to fold as much of the trajectory optimization pipeline as possible into the modeling problem, such that sampling from the model and planning with it become nearly identical. The core of our technical approach lies in a diffusion probabilistic model that plans by iteratively denoising trajectories. We show how classifier-guided sampling and image inpainting can be reinterpreted as coherent planning strategies, explore the unusual and useful properties of diffusion-based planning methods, and demonstrate the effectiveness of our framework in control settings that emphasize long-horizon decision-making and test-time flexibility.
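To make the planning-as-denoising idea concrete, here is a minimal sketch, not the paper's implementation, of how a trajectory might be produced by iterative denoising with classifier-style guidance and inpainting-style conditioning. The function name `guided_denoise`, the interface `model.p_mean_variance`, and the return estimator `guide` are assumed for illustration only.

```python
import torch


@torch.no_grad()
def guided_denoise(model, guide, horizon, transition_dim, n_steps, cond):
    """Hypothetical sketch of planning by iterative trajectory denoising.

    model.p_mean_variance(x, t) -> (mean, std) of one reverse-diffusion step (assumed API)
    guide(x)                    -> scalar return estimate used for classifier-style guidance
    cond                        -> dict {timestep index: fixed state}, applied as inpainting
    """
    # Start from pure Gaussian noise over the whole trajectory (states and actions).
    x = torch.randn(1, horizon, transition_dim)

    for t in reversed(range(n_steps)):
        # Inpainting-style conditioning: clamp constrained timesteps
        # (e.g., the current state and a goal state) to their known values.
        for idx, state in cond.items():
            x[:, idx, : state.shape[-1]] = state

        # One reverse-diffusion step toward the data manifold.
        mean, std = model.p_mean_variance(x, t)

        # Classifier-style guidance: nudge the mean up the gradient of the return estimate.
        with torch.enable_grad():
            x_in = x.detach().requires_grad_(True)
            grad = torch.autograd.grad(guide(x_in).sum(), x_in)[0]
        mean = mean + std ** 2 * grad

        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + std * noise

    # Re-apply conditioning so the fixed states are exact in the final plan.
    for idx, state in cond.items():
        x[:, idx, : state.shape[-1]] = state
    return x
```

In this sketch, sampling and planning coincide: the same denoising loop that generates a trajectory also enforces constraints (via inpainting) and biases the sample toward high return (via guidance).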