Credit assignment in reinforcement learning is the problem of measuring an action's influence on future rewards. In particular, this requires separating skill from luck, i.e. disentangling the effect of an action on rewards from that of external factors and subsequent actions. To achieve this, we adapt the notion of counterfactuals from causality theory to a model-free RL setup. The key idea is to condition value functions on future events, by learning to extract relevant information from a trajectory. We formulate a family of policy gradient algorithms that use these future-conditional value functions as baselines or critics, and show that they are provably low variance. To avoid the potential bias from conditioning on future information, we constrain the hindsight information to not contain information about the agent's actions. We demonstrate the efficacy and validity of our algorithm on a number of illustrative and challenging problems.
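To make the idea concrete, the sketch below shows one way a future-conditional baseline could enter a policy-gradient update. It is a minimal illustration, not the paper's implementation: the network names (PolicyNet, HindsightEncoder, FutureConditionalValue), the dimensions, the synthetic batch, and the use of PyTorch are all assumptions, and the paper's constraint that the hindsight statistic carry no information about the agent's action is only noted in a comment rather than implemented.

```python
# Minimal sketch of a policy gradient with a future-conditional baseline V(s, Phi(future)).
# All module names, sizes, and the synthetic data are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACTION_DIM, FUTURE_DIM, HIDDEN = 8, 4, 16, 64

class PolicyNet(nn.Module):
    """pi(a | s): categorical policy over discrete actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, HIDDEN), nn.ReLU(),
                                 nn.Linear(HIDDEN, ACTION_DIM))
    def forward(self, s):
        return torch.distributions.Categorical(logits=self.net(s))

class HindsightEncoder(nn.Module):
    """Phi(future): summarizes the rest of the trajectory into a hindsight statistic."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FUTURE_DIM, HIDDEN), nn.ReLU(),
                                 nn.Linear(HIDDEN, HIDDEN))
    def forward(self, future):
        return self.net(future)

class FutureConditionalValue(nn.Module):
    """V(s, Phi): baseline conditioned on the state and the hindsight statistic."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM + HIDDEN, HIDDEN), nn.ReLU(),
                                 nn.Linear(HIDDEN, 1))
    def forward(self, s, phi):
        return self.net(torch.cat([s, phi], dim=-1)).squeeze(-1)

policy, encoder, value = PolicyNet(), HindsightEncoder(), FutureConditionalValue()
optim = torch.optim.Adam([*policy.parameters(), *encoder.parameters(),
                          *value.parameters()], lr=1e-3)

# Synthetic batch: states, actions taken, observed returns, and a fixed-size
# summary of each trajectory suffix (stand-in for real rollout data).
B = 32
states = torch.randn(B, STATE_DIM)
actions = torch.randint(0, ACTION_DIM, (B,))
returns = torch.randn(B)
future = torch.randn(B, FUTURE_DIM)

dist = policy(states)
phi = encoder(future)
baseline = value(states, phi)

# Policy-gradient term: the advantage uses the future-conditional baseline and is
# detached so the actor is not updated through the baseline itself.
advantage = (returns - baseline).detach()
pg_loss = -(dist.log_prob(actions) * advantage).mean()

# Baseline regression toward the observed return.
value_loss = F.mse_loss(baseline, returns)

# NOTE: the paper additionally constrains Phi to contain no information about the
# agent's action (to avoid bias from conditioning on the future); that constraint
# is omitted here to keep the sketch self-contained.
loss = pg_loss + value_loss
optim.zero_grad()
loss.backward()
optim.step()
```

Conditioning the baseline on a hindsight statistic of the future is what lets the update subtract out "luck" (external randomness and later actions); the action-independence constraint is what keeps that conditioning from biasing the gradient.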