Artificial neural networks are promising general function approximators, but they are challenging to train on non-independent and identically distributed (non-i.i.d.) data because of catastrophic forgetting. Experience replay, a standard component of deep reinforcement learning, is often used to reduce forgetting and improve sample efficiency by storing experiences in a large buffer and reusing them for training later. However, a large replay buffer imposes a heavy memory burden, especially on onboard and edge devices with limited memory capacity. To alleviate this problem, we propose memory-efficient reinforcement learning algorithms based on the deep Q-network (DQN) algorithm. Our algorithms reduce forgetting and maintain high sample efficiency by consolidating knowledge from the target Q-network into the current Q-network. Compared to baseline methods, our algorithms achieve comparable or better performance on both feature-based and image-based tasks while easing the burden of a large experience replay buffer.
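To make the consolidation idea concrete, below is a minimal PyTorch sketch of a DQN-style loss augmented with a knowledge-consolidation term that keeps the current Q-network's predictions close to the target Q-network's predictions on replayed states. The network architecture, the `consolidation_weight` hyperparameter, and the exact form of the penalty (an MSE between current and target Q-values here) are illustrative assumptions for exposition, not the paper's precise formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class QNetwork(nn.Module):
    """Small MLP Q-network for feature-based tasks (illustrative only)."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def dqn_loss_with_consolidation(
    q_net: QNetwork,
    target_net: QNetwork,
    batch,                      # tuple: (obs, action, reward, next_obs, done)
    gamma: float = 0.99,
    consolidation_weight: float = 1.0,  # hypothetical hyperparameter
) -> torch.Tensor:
    obs, action, reward, next_obs, done = batch

    # Standard DQN temporal-difference loss against the frozen target network.
    q_values = q_net(obs).gather(1, action.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_q = target_net(next_obs).max(dim=1).values
        td_target = reward + gamma * (1.0 - done) * next_q
    td_loss = F.smooth_l1_loss(q_values, td_target)

    # Consolidation term: penalize the current network for drifting away from
    # the target network's Q-values on the sampled states, so knowledge about
    # earlier experience is retained even when the replay buffer is small.
    with torch.no_grad():
        target_q_all = target_net(obs)
    consolidation_loss = F.mse_loss(q_net(obs), target_q_all)

    return td_loss + consolidation_weight * consolidation_loss
```

In this sketch, setting `consolidation_weight` to zero recovers the ordinary DQN loss, while larger values trade plasticity for retention; the point is only to show how a consolidation penalty can substitute for part of the role a large replay buffer normally plays.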