Text-based games (TBGs) have emerged as promising environments for driving research in grounded language understanding and for studying problems such as generalization and sample efficiency. Several deep reinforcement learning (RL) methods with varying architectures and learning schemes have been proposed for TBGs. However, these methods fail to generalize efficiently, especially under distributional shifts. Departing from deep RL approaches, in this paper we propose a general method inspired by case-based reasoning to train agents that generalize beyond the training distribution. The case-based reasoner collects instances of positive experiences from the agent's past interactions with the world and later reuses them to act efficiently. The method can be applied in conjunction with any existing on-policy neural agent for TBGs in the literature. Our experiments show that the proposed approach consistently improves existing methods, achieves good out-of-distribution generalization, and sets new state-of-the-art results on widely used environments.
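To make the core idea concrete, the following is a minimal sketch of a case-based memory for a text-based game agent: it stores (observation, action) pairs only from positively rewarded experiences and retrieves the action of the most similar stored case. All names here (`CaseMemory`, the bag-of-words overlap similarity) are illustrative assumptions, not the paper's actual retrieval mechanism or architecture.

```python
# Hypothetical sketch of a case-based reasoner; not the paper's implementation.
from collections import Counter

class CaseMemory:
    """Stores (observation, action) pairs from positively rewarded steps."""

    def __init__(self):
        self.cases = []  # list of (bag-of-words observation, action)

    def add(self, observation, action, reward):
        # Keep only positive experiences, as the abstract describes.
        if reward > 0:
            self.cases.append((Counter(observation.lower().split()), action))

    def retrieve(self, observation):
        # Reuse the action of the most similar stored case
        # (similarity = bag-of-words overlap, an illustrative choice).
        if not self.cases:
            return None
        query = Counter(observation.lower().split())
        best = max(self.cases, key=lambda case: sum((query & case[0]).values()))
        return best[1]

memory = CaseMemory()
memory.add("you see a locked chest and a rusty key", "take key", reward=1.0)
memory.add("a goblin blocks the corridor", "attack goblin", reward=1.0)
memory.add("you fall into a pit", "wait", reward=-1.0)  # negative: not stored
print(memory.retrieve("there is a key next to the chest"))  # prints "take key"
```

In a full agent, such a memory would be consulted alongside the on-policy neural policy rather than replacing it, which is what allows the approach to be layered on top of existing TBG agents.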