Programming robot behaviour in a complex world faces challenges on multiple levels, from dextrous low-level skills to high-level planning and reasoning. Recent pre-trained Large Language Models (LLMs) have shown remarkable reasoning ability in zero-shot robotic planning. However, it remains challenging to ground LLMs in multimodal sensory input and continuous action output, while enabling a robot to interact with its environment and acquire novel information as its policies unfold. We develop a robot interaction scenario with a partially observable state, which requires the robot to decide on a range of epistemic actions in order to sample sensory information across multiple modalities before it can execute the task correctly. We therefore propose an interactive perception framework with an LLM as its backbone, exploiting its ability to instruct epistemic actions, to reason over the resulting multimodal sensations (vision, sound, haptics, proprioception), and to plan an entire task execution based on the interactively acquired information. Our study demonstrates that LLMs can provide high-level planning and reasoning skills and control interactive robot behaviour in a multimodal environment, while multimodal modules grounded in the context of the environmental state help anchor the LLMs and extend their processing ability.
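To make the described interaction loop concrete, the following is a minimal sketch of an LLM-driven interactive perception cycle: the LLM proposes an epistemic action, the robot executes it, perception modules summarise the resulting multimodal feedback as text, and the LLM finally plans the task from the gathered evidence. All helper names (`llm`, `describe`, `robot.execute`, `robot.run`) are hypothetical placeholders, not the framework's actual API.

```python
# Hedged sketch of the interactive perception loop; helper names are assumptions.

def describe(modality: str, observation) -> str:
    """Hypothetical perception module: summarise a sensory observation as text."""
    return f"{modality}: {observation}"

def llm(prompt: str) -> str:
    """Placeholder for a call to a pre-trained LLM (API or local model)."""
    raise NotImplementedError("plug in an actual LLM backend here")

def interactive_task(instruction: str, robot, max_steps: int = 10) -> None:
    context = [f"Task: {instruction}"]
    for _ in range(max_steps):
        # Ask the LLM which epistemic action to take next (e.g. knock, weigh, look).
        action = llm("\n".join(context) + "\nNext epistemic action:")
        if action.strip().lower() == "done":
            break
        # Execute the action; the robot returns multimodal sensory feedback,
        # e.g. {"sound": ..., "haptics": ..., "vision": ...}.
        feedback = robot.execute(action)
        # Ground each modality as text so the LLM can reason over it.
        context += [describe(m, obs) for m, obs in feedback.items()]
    # Finally, let the LLM plan the full task execution from the gathered evidence.
    plan = llm("\n".join(context) + "\nFinal plan:")
    robot.run(plan)
```

The key design choice illustrated here is that perception is verbalised into the LLM's context rather than fed in as raw signals, which is what allows a text-only model to reason over vision, sound, haptics, and proprioception.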