Credit assignment is one of the central problems in reinforcement learning. The predominant approach is to assign credit based on the expected return. However, we show that the expected return may depend on the policy in an undesirable way, which could slow down learning. Instead, we borrow ideas from the causality literature and show that the advantage function can be interpreted as causal effects, which share similar properties with causal representations. Based on this insight, we propose Direct Advantage Estimation (DAE), a novel method that can model the advantage function and estimate it directly from data without requiring the (action-)value function. If desired, value functions can also be seamlessly integrated into DAE and updated in a manner similar to Temporal Difference Learning. The proposed method is easy to implement and can be readily adopted by modern actor-critic methods. We test DAE empirically on the Atari domain and show that it achieves results competitive with the state-of-the-art method for advantage estimation.
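To make "estimate it directly from data" concrete, the following is an illustrative sketch of such a formulation (our reading of the abstract, not a verbatim statement of the paper's objective): restrict candidate advantage estimates $\hat{A}$ to those that are centered under the policy $\pi$, and fit them to observed returns by least squares,
\[
\min_{\hat{A},\,\hat{V}} \ \mathbb{E}_\pi\!\left[\left(\sum_{t\ge 0}\gamma^{t}\bigl(R_t - \hat{A}(S_t, A_t)\bigr) - \hat{V}(S_0)\right)^{\!2}\right]
\quad\text{subject to}\quad
\sum_{a}\pi(a\mid s)\,\hat{A}(s,a)=0 \ \ \forall s,
\]
where $\hat{V}$ plays the role of the optional value function mentioned above. Under a scheme of this form, the centering constraint is what allows the advantage to be modeled and regressed on returns directly, without first estimating an action-value function.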