Text-based games present a unique class of sequential decision-making problems in which agents interact with a partially observable, simulated environment via actions and observations conveyed through natural language. Such observations typically include instructions that, in a reinforcement learning (RL) setting, can directly or indirectly guide a player towards completing reward-worthy tasks. In this work, we study the ability of RL agents to follow such instructions. Our experiments show that the performance of state-of-the-art text-based game agents is largely unaffected by the presence or absence of such instructions, and that these agents are typically unable to execute tasks to completion. To further study and address the task of instruction following, we equip RL agents with an internal structured representation of natural language instructions in the form of Linear Temporal Logic (LTL), a formal language that is increasingly used for temporally extended reward specification in RL. Our framework both supports and highlights the benefit of understanding the temporal semantics of instructions and of measuring progress towards the achievement of such temporally extended behaviour. Experiments with 500+ games in TextWorld demonstrate the superior performance of our approach.
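Measuring progress towards a temporally extended behaviour, as the abstract describes, is commonly realized through LTL formula progression (Bacchus & Kabanza), which rewrites a formula after each step to capture what remains to be satisfied. Below is a minimal Python sketch of progression under that assumption; the tuple-based formula encoding and the proposition names (`take_key`, `open_chest`) are illustrative, not the paper's actual implementation.

```python
# Minimal sketch of LTL formula progression; the encoding is an
# illustrative assumption, not the paper's implementation.
TRUE, FALSE = ("true",), ("false",)

def _and(a, b):
    if FALSE in (a, b): return FALSE
    if a == TRUE: return b
    if b == TRUE: return a
    return ("and", a, b)

def _or(a, b):
    if TRUE in (a, b): return TRUE
    if a == FALSE: return b
    if b == FALSE: return a
    return ("or", a, b)

def prog(f, props):
    """Progress formula f through one time step, given the set of true propositions."""
    op = f[0]
    if op in ("true", "false"):
        return f
    if op == "prop":
        return TRUE if f[1] in props else FALSE
    if op == "and":
        return _and(prog(f[1], props), prog(f[2], props))
    if op == "or":
        return _or(prog(f[1], props), prog(f[2], props))
    if op == "next":        # X f1: f1 must hold from the next step on
        return f[1]
    if op == "until":       # f1 U f2  ->  prog(f2) or (prog(f1) and (f1 U f2))
        return _or(prog(f[2], props), _and(prog(f[1], props), f))
    if op == "eventually":  # F f1  ->  prog(f1) or F f1
        return _or(prog(f[1], props), f)
    raise ValueError(f"unknown operator: {op}")

# Hypothetical instruction "take the key, then open the chest":
# F(take_key and X F(open_chest))
instruction = ("eventually",
               ("and", ("prop", "take_key"),
                       ("next", ("eventually", ("prop", "open_chest")))))

f = prog(instruction, {"take_key"})  # key taken: opening the chest now suffices
f = prog(f, {"open_chest"})          # chest opened: the instruction is satisfied
assert f == TRUE
```

In an RL loop, such progression can supply intermediate signals whenever the remaining formula simplifies, which is one way an agent can be credited for partial completion of an instruction rather than only for finishing the full task.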