The Deep Q-Network (DQN) is a successful approach that combines reinforcement learning with deep neural networks and has led to the widespread application of reinforcement learning. One challenging problem when applying DQN or other reinforcement learning algorithms to real-world problems is data collection. Therefore, improving data efficiency is one of the most important problems in reinforcement learning research. In this paper, we propose a framework that uses the Max-Mean loss in the Deep Q-Network (M$^2$DQN). Instead of sampling one batch of experiences in the training step, we sample several batches from the experience replay buffer and update the parameters such that the maximum TD error over these batches is minimized. The proposed method can be combined with most existing DQN techniques by replacing the loss function. We verify the effectiveness of this framework with one of the most widely used techniques, Double DQN (DDQN), on several gym games. The results show that our method leads to substantial improvements in both learning speed and performance.
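To make the max-mean update concrete, the following is a minimal sketch of one training step under the stated idea, assuming a PyTorch-style setting with Double DQN targets; `q_net`, `target_net`, and `replay_buffer.sample` are hypothetical placeholders, not the authors' implementation.

```python
# Sketch of a max-mean loss update: sample several batches and minimize
# the maximum of their per-batch mean TD errors (Double DQN targets).
import torch
import torch.nn.functional as F

def m2dqn_update(q_net, target_net, replay_buffer, optimizer,
                 num_batches=4, batch_size=32, gamma=0.99):
    batch_losses = []
    for _ in range(num_batches):
        s, a, r, s_next, done = replay_buffer.sample(batch_size)

        # Double DQN target: the online net selects the next action,
        # the target net evaluates it.
        with torch.no_grad():
            next_a = q_net(s_next).argmax(dim=1, keepdim=True)
            next_q = target_net(s_next).gather(1, next_a).squeeze(1)
            target = r + gamma * (1.0 - done) * next_q

        q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        batch_losses.append(F.mse_loss(q, target))  # mean TD error of this batch

    # Max-mean loss: take the gradient step w.r.t. the worst (largest) batch loss.
    loss = torch.stack(batch_losses).max()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because only the loss function changes, the same step can in principle wrap other DQN variants by swapping how `target` is computed.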