The distributional reinforcement learning (RL) approach advocates for representing the complete probability distribution of the random return instead of only modelling its expectation. A distributional RL algorithm may be characterised by two main components: the representation and parameterisation of the distribution, and the probability metric defining the loss. The present research work considers the unconstrained monotonic neural network (UMNN) architecture, a universal approximator of continuous monotonic functions that is particularly well suited for modelling the different representations of a distribution (PDF, CDF, QF). This property enables the efficient decoupling of the effect of the function approximator class from that of the probability metric. The research paper firstly introduces a methodology for learning different representations of the random return distribution. Secondly, a novel distributional RL algorithm named unconstrained monotonic deep Q-network (UMDQN) is presented. Lastly, in light of this new algorithm, an empirical comparison is performed between three probability quasi-metrics, namely the Kullback-Leibler divergence, the Cramer distance, and the Wasserstein distance. The results highlight the main strengths and weaknesses associated with each probability metric, together with an important limitation of the Wasserstein distance. This research concludes by calling for a reconsideration of all probability metrics in distributional RL, contrasting with the clear dominance of the Wasserstein distance in recent publications.
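As an illustrative aside (not taken from the paper), the sketch below shows how the three probability quasi-metrics compared in this work can be evaluated on two discretised return distributions using standard NumPy/SciPy routines; the toy Gaussian distributions and the support grid are assumptions made purely for illustration.

```python
# Illustrative sketch, assuming NumPy/SciPy and two toy return distributions.
import numpy as np
from scipy.stats import norm, entropy, wasserstein_distance

# Common support for the random return Z, discretised into bins.
z = np.linspace(-10.0, 10.0, 201)
dz = z[1] - z[0]

# Two toy return distributions (e.g. current estimate vs. target), normalised.
p = norm.pdf(z, loc=0.0, scale=1.0)
q = norm.pdf(z, loc=1.0, scale=1.5)
p /= p.sum()
q /= q.sum()

# Kullback-Leibler divergence KL(p || q) on the discretised PDFs.
kl = entropy(p, q)

# Cramer distance: L2 distance between the two CDFs.
P, Q = np.cumsum(p), np.cumsum(q)
cramer = np.sqrt(np.sum((P - Q) ** 2) * dz)

# Wasserstein-1 distance between the two distributions on support z.
w1 = wasserstein_distance(z, z, u_weights=p, v_weights=q)

print(f"KL = {kl:.4f}, Cramer = {cramer:.4f}, W1 = {w1:.4f}")
```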