We study the multi-step off-policy learning approach to distributional RL. Despite the apparent similarity between value-based RL and distributional RL, our study reveals intriguing and fundamental differences between the two cases in the multi-step setting. We identify a novel notion of path-dependent distributional TD error, which is indispensable for principled multi-step distributional RL. The distinction from the value-based case has important implications for concepts such as backward-view algorithms. Our work provides the first theoretical guarantees on multi-step off-policy distributional RL algorithms, including results that apply to the small number of existing approaches to multi-step distributional RL. In addition, we derive a novel algorithm, Quantile Regression-Retrace, which leads to a deep RL agent QR-DQN-Retrace that shows empirical improvements over QR-DQN on the Atari-57 benchmark. Collectively, we shed light on how unique challenges in multi-step distributional RL can be addressed both in theory and practice.