Natural and formal languages provide an effective mechanism for humans to specify instructions and reward functions. We investigate how to generate policies via RL when reward functions are specified in a symbolic language captured by Reward Machines, an increasingly popular automaton-inspired structure. We are interested in the case where the mapping of environment state to a symbolic (here, Reward Machine) vocabulary -- commonly known as the labelling function -- is uncertain from the perspective of the agent. We formulate the problem of policy learning in Reward Machines with noisy symbolic abstractions as a special class of POMDP optimization problem, and investigate several methods to address the problem, building on existing and new techniques, the latter focused on predicting the Reward Machine state rather than on the grounding of individual symbols. We analyze these methods and evaluate them experimentally under varying degrees of uncertainty in the correct interpretation of the symbolic vocabulary. We verify the strength of our approach and the limitations of existing methods via an empirical investigation on both illustrative toy domains and partially observable, deep RL domains.
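To make the setting concrete, the sketch below illustrates a Reward Machine whose transitions fire on symbolic labels, paired with a noisy labelling function. This is a minimal, illustrative example and not the paper's implementation: the class and function names, the coffee/office vocabulary, and the i.i.d. symbol-flip noise model are all assumptions made for illustration.

```python
# Minimal sketch: a Reward Machine driven by a noisy labelling function.
# All names (RewardMachine, noisy_labelling_fn, the coffee/office symbols)
# are hypothetical and chosen only to illustrate the problem setting.
import random
from dataclasses import dataclass
from typing import Dict, FrozenSet, Tuple

Label = FrozenSet[str]  # a truth assignment over the symbolic vocabulary
VOCAB = ("coffee", "office")


@dataclass
class RewardMachine:
    """Automaton whose state transitions are triggered by symbolic labels."""
    initial_state: int
    # (rm_state, label) -> (next_rm_state, reward)
    transitions: Dict[Tuple[int, Label], Tuple[int, float]]

    def step(self, rm_state: int, label: Label) -> Tuple[int, float]:
        # Remain in place with zero reward if no transition matches the label.
        return self.transitions.get((rm_state, label), (rm_state, 0.0))


def noisy_labelling_fn(env_state: Dict[str, bool], flip_prob: float = 0.1) -> Label:
    """Return the agent's (possibly wrong) label for the current state.

    The ground-truth label is corrupted by independent symbol flips; this
    simple noise model is an assumption for illustration, whereas the paper
    considers more general uncertainty in the labelling function.
    """
    noisy = set()
    for p in VOCAB:
        holds = env_state.get(p, False)
        if random.random() < flip_prob:
            holds = not holds  # the agent perceives the wrong truth value
        if holds:
            noisy.add(p)
    return frozenset(noisy)


# Task: get coffee (state 0 -> 1), then reach the office (state 1 -> 2, reward 1).
rm = RewardMachine(
    initial_state=0,
    transitions={
        (0, frozenset({"coffee"})): (1, 0.0),
        (1, frozenset({"office"})): (2, 1.0),
    },
)
u, r = rm.step(rm.initial_state, noisy_labelling_fn({"coffee": True}))
print(u, r)  # with probability ~0.81 prints "1 0.0"; noise may keep u at 0
```

When the labelling function misreports a symbol, the agent's tracked Reward Machine state can diverge from the true one, which is precisely the source of partial observability that motivates treating the problem as a POMDP and predicting the Reward Machine state directly.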