Off-policy reinforcement learning (RL) from pixel observations is notoriously unstable. As a result, many successful algorithms must combine different domain-specific practices and auxiliary losses to learn meaningful behaviors in complex environments. In this work, we provide novel analysis demonstrating that these instabilities arise from performing temporal-difference learning with a convolutional encoder and low-magnitude rewards. We show that this new visual deadly triad causes unstable training and premature convergence to degenerate solutions, a phenomenon we name catastrophic self-overfitting. Based on our analysis, we propose A-LIX, a method providing adaptive regularization to the encoder's gradients that explicitly prevents the occurrence of catastrophic self-overfitting using a dual objective. By applying A-LIX, we significantly outperform the prior state-of-the-art on the DeepMind Control and Atari 100k benchmarks without any data augmentation or auxiliary losses.
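To make the core idea concrete, below is a minimal PyTorch sketch of one way encoder-gradient regularization of this flavor could be realized: a layer that randomly resamples each spatial location of the convolutional feature map with bilinear interpolation, which smooths the gradients propagated back into the encoder. This is an illustrative assumption, not the paper's A-LIX implementation; in particular, the class name `LocalInterpolationLayer` and the fixed `max_shift` hyperparameter are hypothetical, whereas A-LIX tunes the smoothing strength adaptively via a dual objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalInterpolationLayer(nn.Module):
    """Illustrative sketch: perturb each spatial location of a feature map
    by a small random offset and resample bilinearly, smoothing the
    gradients that flow back into the convolutional encoder."""

    def __init__(self, max_shift: float = 1.0):
        super().__init__()
        # Maximum per-cell shift, measured in feature-map cells.
        # Fixed here for simplicity (hypothetical hyperparameter).
        self.max_shift = max_shift

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Base sampling grid in normalized [-1, 1] coordinates.
        ys = torch.linspace(-1.0, 1.0, h, device=x.device)
        xs = torch.linspace(-1.0, 1.0, w, device=x.device)
        grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
        base_grid = torch.stack((grid_x, grid_y), dim=-1)      # (H, W, 2)
        base_grid = base_grid.unsqueeze(0).expand(b, -1, -1, -1)
        # Random per-location offsets, converted from cells to normalized units.
        scale = torch.tensor([2.0 / max(w - 1, 1), 2.0 / max(h - 1, 1)],
                             device=x.device)
        offsets = (torch.rand(b, h, w, 2, device=x.device) * 2 - 1)
        offsets = offsets * self.max_shift * scale
        # Resample the feature map; out-of-range samples clamp to the border.
        return F.grid_sample(x, base_grid + offsets, mode="bilinear",
                             padding_mode="border", align_corners=True)


# Example usage: place the layer after a small convolutional encoder.
encoder = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, stride=2), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, stride=1), nn.ReLU(),
    LocalInterpolationLayer(max_shift=1.0),
)
features = encoder(torch.randn(8, 3, 84, 84))
```

Because the random resampling is applied in the forward pass, each backward pass spreads every output gradient over a small spatial neighborhood of the feature map, which is the kind of smoothing effect the abstract attributes to regularizing the encoder's gradients.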