The remarkable empirical performance of distributional reinforcement learning (RL) has drawn increasing attention to understanding its theoretical advantages over classical RL. By decomposing the categorical distributional loss commonly employed in distributional RL, we find that the potential superiority of distributional RL can be attributed to a derived distribution-matching entropy regularization. This less-studied entropy regularization aims to capture additional knowledge of the return distribution beyond its expectation alone, contributing an augmented reward signal in policy optimization. In contrast to the vanilla entropy regularization in MaxEnt RL, which explicitly encourages exploration by promoting diverse actions, the novel entropy regularization derived from the categorical distributional loss implicitly updates the policy to align it with the (estimated) environmental uncertainty. Finally, extensive experiments verify the significance of this uncertainty-aware regularization in accounting for the empirical benefits of distributional RL over classical RL. Our study offers a novel exploration perspective for explaining the intrinsic benefits of distributional learning in RL.
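As a rough illustration of the kind of loss decomposition referred to above, consider a C51-style categorical loss, i.e., the cross-entropy between a (projected) target return distribution $p$ and the predicted categorical distribution $q_\theta$ over a fixed support $\{z_1,\dots,z_N\}$. The identity below is a minimal sketch with generic symbols, not necessarily the exact decomposition analyzed in this work:
$$
\underbrace{-\sum_{i=1}^{N} p_i \log q_\theta(z_i)}_{\text{categorical (cross-entropy) loss}}
\;=\;
\underbrace{D_{\mathrm{KL}}\!\big(p \,\|\, q_\theta\big)}_{\text{distribution matching}}
\;+\;
\underbrace{H(p)}_{\text{entropy of the target}} .
$$
Under this reading, minimizing the categorical loss drives the learned distribution to match the full (estimated) return distribution rather than only its mean, which is where the uncertainty-aware regularization effect described above is argued to originate.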