We introduce the first work to explore web-scale diffusion models for robotics. DALL-E-Bot enables a robot to rearrange objects in a scene by first inferring a text description of those objects, then generating an image representing a natural, human-like arrangement of those objects, and finally physically arranging the objects according to that image. Significantly, we achieve this zero-shot using DALL-E, without needing any further data collection or training. Encouraging real-world results, evaluated with human studies, show that this is a promising direction for the future of web-scale robot learning. We also propose a list of recommendations to the text-to-image community, to align further developments of these models with applications to robotics.