Recent improvements in conditional generative modeling have made it possible to generate high-quality images from language descriptions alone. We investigate whether these methods can directly address the problem of sequential decision-making. We view decision-making not through the lens of reinforcement learning (RL), but rather through conditional generative modeling. To our surprise, we find that our formulation leads to policies that can outperform existing offline RL approaches across standard benchmarks. By modeling a policy as a return-conditional diffusion model, we illustrate how we may circumvent the need for dynamic programming and subsequently eliminate many of the complexities that come with traditional offline RL. We further demonstrate the advantages of modeling policies as conditional diffusion models by considering two other conditioning variables: constraints and skills. Conditioning on a single constraint or skill during training leads to behaviors at test-time that can satisfy several constraints together or demonstrate a composition of skills. Our results illustrate that conditional generative modeling is a powerful tool for decision-making.
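Below is a minimal, hypothetical sketch (not the authors' implementation) of the core idea described above: sampling an action trajectory from a return-conditional diffusion model, where conditioning is applied via classifier-free guidance so that conditions learned separately (e.g. different constraints or skills) could in principle be composed at test time. The names `Denoiser`, `guidance_scale`, and the simplified denoising update are assumptions for illustration only; the actual architecture and noise schedule differ.

```python
# Illustrative sketch of return-conditional diffusion sampling with
# classifier-free guidance. All names and the simplified update rule
# are hypothetical, not the paper's actual implementation.
import torch
import torch.nn as nn


class Denoiser(nn.Module):
    """Predicts the noise added to a flattened trajectory, conditioned on
    the diffusion timestep and a scalar return (or a null token)."""

    def __init__(self, traj_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(traj_dim + 2, hidden),  # +1 for timestep, +1 for return
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, traj_dim),
        )

    def forward(self, x, t, ret):
        # x: (B, traj_dim), t: (B, 1), ret: (B, 1); ret == 0 acts as the null token
        return self.net(torch.cat([x, t, ret], dim=-1))


@torch.no_grad()
def sample(model, traj_dim, target_return, steps=50, guidance_scale=1.2):
    """Reverse diffusion guided toward a desired return, without dynamic programming."""
    x = torch.randn(1, traj_dim)              # start from pure noise
    ret = torch.full((1, 1), target_return)   # conditioning variable
    null = torch.zeros(1, 1)                  # unconditional (null) token
    for k in reversed(range(steps)):
        t = torch.full((1, 1), k / steps)
        eps_cond = model(x, t, ret)           # conditional noise estimate
        eps_uncond = model(x, t, null)        # unconditional noise estimate
        # classifier-free guidance: extrapolate toward the conditional estimate
        eps = eps_uncond + guidance_scale * (eps_cond - eps_uncond)
        x = x - eps / steps                   # simplified denoising step
        if k > 0:
            x = x + (1.0 / steps) ** 0.5 * torch.randn_like(x)
    return x                                  # denoised trajectory sample


model = Denoiser(traj_dim=8)
plan = sample(model, traj_dim=8, target_return=0.9)
print(plan.shape)  # torch.Size([1, 8])
```

Composing several conditions (e.g. multiple constraints) would, under the same sketch, amount to summing their guided noise corrections before the denoising step, which is how a model trained on single conditions can exhibit composed behavior at test time.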