We introduce the first work to explore web-scale diffusion models for robotics. DALL-E-Bot enables a robot to rearrange objects in a scene by first inferring a text description of those objects, then generating an image representing a natural, human-like arrangement of those objects, and finally physically arranging the objects according to that image. Significantly, we achieve this zero-shot using DALL-E, without any further data collection or training. Encouraging real-world results, with human studies, show that this is an exciting direction for the future of web-scale robot learning algorithms. We also propose a list of recommendations to the text-to-image community, to align further developments of these models with applications to robotics. Videos are available at: https://www.robot-learning.uk/dall-e-bot