A central task in control theory, artificial intelligence, and formal methods is to synthesize reward-maximizing strategies for agents that operate in partially unknown environments. In environments modeled by gray-box Markov decision processes (MDPs), the impact of the agents' actions is known in terms of successor states, but the underlying probabilities are not. In this paper, we devise a strategy synthesis algorithm for gray-box MDPs via reinforcement learning that uses interval MDPs as its internal model. To cope with limited sampling access in reinforcement learning, we incorporate two novel concepts into our algorithm, focusing on rapid and successful learning rather than on stochastic guarantees and optimality: lower confidence bound exploration reinforces variants of already learned practical strategies, and action scoping reduces the learning action space to promising actions. We illustrate the benefits of our algorithm by means of a prototypical implementation applied to examples from the AI and formal methods communities.
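To make the two ingredients concrete, the following is a minimal, self-contained Python sketch of lower-confidence-bound (LCB) action selection combined with action scoping in a tabular reinforcement-learning loop. It is an illustration under simplifying assumptions, not the authors' algorithm: the toy gray-box MDP, the Hoeffding-style confidence radius, and all identifiers are hypothetical, and the sketch tracks confidence bounds on Q-values directly rather than maintaining an interval MDP as the internal model.

```python
"""Illustrative sketch (not the paper's implementation) of LCB exploration
and action scoping.  The toy MDP, the confidence-radius formula, and all
names below are assumptions chosen for a minimal runnable example."""

import math
import random
from collections import defaultdict

# Toy gray-box MDP: the successor states of each (state, action) pair are
# known to the learner, but the probabilities in HIDDEN_PROBS are not.
SUCCESSORS = {
    ("s0", "a"): ["s1", "s2"],
    ("s0", "b"): ["s2"],
    ("s1", "a"): ["s0"],
    ("s2", "a"): ["s0"],
}
HIDDEN_PROBS = {("s0", "a"): [0.9, 0.1], ("s0", "b"): [1.0],
                ("s1", "a"): [1.0], ("s2", "a"): [1.0]}
REWARDS = {("s0", "a"): 1.0, ("s0", "b"): 0.2,
           ("s1", "a"): 0.0, ("s2", "a"): 0.0}

def sample_step(state, action):
    succs = SUCCESSORS[(state, action)]
    return random.choices(succs, HIDDEN_PROBS[(state, action)])[0], REWARDS[(state, action)]

def actions_of(state):
    return [a for (s, a) in SUCCESSORS if s == state]

q = defaultdict(float)      # running Q-value estimates
counts = defaultdict(int)   # sample counts per (state, action)
scoped_out = set()          # actions pruned by action scoping
GAMMA, ALPHA, EPISODES, HORIZON = 0.95, 0.1, 500, 20

def radius(sa):
    # Hoeffding-style confidence radius; shrinks as samples accumulate.
    n = counts[sa]
    return 1.0 if n == 0 else math.sqrt(2.0 * math.log(n + 1) / n)

def pick_action(state):
    candidates = [a for a in actions_of(state) if (state, a) not in scoped_out]
    # LCB exploration: prefer the action with the best *pessimistic* value,
    # reinforcing strategies that have already proven practical.
    return max(candidates, key=lambda a: q[(state, a)] - radius((state, a)))

for _ in range(EPISODES):
    state = "s0"
    for _ in range(HORIZON):
        action = pick_action(state)
        nxt, reward = sample_step(state, action)
        sa = (state, action)
        counts[sa] += 1
        best_next = max(q[(nxt, a)] for a in actions_of(nxt))
        q[sa] += ALPHA * (reward + GAMMA * best_next - q[sa])
        state = nxt
    # Action scoping: drop actions whose optimistic estimate (upper bound)
    # cannot beat the best pessimistic estimate (lower bound) in the state.
    for s in {s for (s, _) in SUCCESSORS}:
        live = [a for a in actions_of(s) if (s, a) not in scoped_out]
        if len(live) < 2:
            continue
        best_lcb = max(q[(s, a)] - radius((s, a)) for a in live)
        for a in live:
            if counts[(s, a)] > 10 and q[(s, a)] + radius((s, a)) < best_lcb:
                scoped_out.add((s, a))

print({s: pick_action(s) for s in ("s0", "s1", "s2")})
```

The pruning step never removes the action attaining the best lower bound, so every state retains at least one admissible action while clearly dominated actions are scoped out of the learning action space.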