In recent years, Deep Reinforcement Learning (DRL) algorithms have achieved state-of-the-art performance in many challenging strategy games. Because these games have complicated rules, an action sampled from the full discrete action distribution predicted by the learned policy is likely to be invalid according to the game rules (e.g., walking into a wall). The usual approach to deal with this problem in policy gradient algorithms is to "mask out" invalid actions and just sample from the set of valid actions. The implications of this process, however, remain under-investigated. In this paper, we 1) show theoretical justification for such a practice, 2) empirically demonstrate its importance as the space of invalid actions grows, and 3) provide further insights by evaluating different action masking regimes, such as removing masking after an agent has been trained using masking. The source code can be found at https://github.com/vwxyzjn/invalid-action-masking
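As a rough illustration of the masking step the abstract refers to, the sketch below shows the common "large negative logit" trick in PyTorch: logits of invalid actions are replaced with a very negative value before building the categorical distribution, so sampling effectively never picks them. This is only a minimal sketch, not the paper's implementation; the function name `masked_categorical` and the -1e8 constant are illustrative choices.

```python
import torch
from torch.distributions import Categorical

def masked_categorical(logits: torch.Tensor, action_mask: torch.Tensor) -> Categorical:
    """Build a categorical distribution that assigns ~zero probability to invalid actions.

    `action_mask` is a boolean tensor with the same shape as `logits`,
    True for valid actions. Invalid logits are replaced with a large
    negative number, so softmax gives them negligible probability mass.
    """
    masked_logits = torch.where(action_mask, logits, torch.full_like(logits, -1e8))
    return Categorical(logits=masked_logits)

# Hypothetical usage: 4 discrete actions, actions 1 and 3 are invalid this step.
logits = torch.randn(4)
mask = torch.tensor([True, False, True, False])
dist = masked_categorical(logits, mask)
action = dist.sample()             # always a valid action
log_prob = dist.log_prob(action)   # fed into the policy gradient update as usual
```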