A computing cluster that interconnects multiple compute nodes is used to accelerate distributed reinforcement learning based on DQN (Deep Q-Network). In distributed reinforcement learning, Actor nodes acquire experiences by interacting with a given environment while a Learner node optimizes the DQN model. Since the volume of data transferred between Actor and Learner nodes grows with the number of Actor nodes and the size of their experiences, the communication overhead between them is one of the major performance bottlenecks. In this paper, this communication is accelerated by DPDK-based network optimizations, and a DPDK-based low-latency experience replay memory server is deployed between Actor and Learner nodes interconnected with a 40GbE (40Gbit Ethernet) network. Evaluation results show that, as a network optimization technique, kernel bypassing by DPDK reduces network access latencies to a shared memory server by 32.7% to 58.9%. As another network optimization technique, an in-network experience replay memory server between Actor and Learner nodes reduces access latencies to the experience replay memory by 11.7% to 28.1% and communication latencies for prioritized experience sampling by 21.9% to 29.1%.
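To make the Actor/Learner data flow sketched above concrete, the following is a minimal, illustrative sketch (not the paper's implementation) of a prioritized experience replay memory: Actor nodes push (experience, priority) pairs, and the Learner node draws batches in proportion to their priorities and refreshes them after training. In the paper this memory is hosted on a DPDK-based in-network server rather than in-process; all class and method names below are hypothetical.

```python
import random
from collections import namedtuple

Experience = namedtuple("Experience", ["state", "action", "reward", "next_state", "done"])

class PrioritizedReplayMemory:
    """Shared experience replay memory with proportional prioritized sampling."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []      # stored experiences
        self.priorities = []  # one priority per experience (e.g. |TD error| + eps)
        self.pos = 0          # ring-buffer write position

    def push(self, experience, priority):
        """Called by Actor nodes: store one experience with its priority."""
        if len(self.buffer) < self.capacity:
            self.buffer.append(experience)
            self.priorities.append(priority)
        else:
            self.buffer[self.pos] = experience
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        """Called by the Learner node: sample a batch proportionally to priority."""
        indices = random.choices(range(len(self.buffer)),
                                 weights=self.priorities, k=batch_size)
        return indices, [self.buffer[i] for i in indices]

    def update_priorities(self, indices, new_priorities):
        """Called by the Learner after training to refresh TD-error priorities."""
        for i, p in zip(indices, new_priorities):
            self.priorities[i] = p


if __name__ == "__main__":
    memory = PrioritizedReplayMemory(capacity=1000)
    # Actors push experiences (dummy data here) with initial priorities.
    for step in range(100):
        exp = Experience(state=step, action=0, reward=1.0, next_state=step + 1, done=False)
        memory.push(exp, priority=1.0)
    # The Learner samples a prioritized batch and later updates its priorities.
    idx, batch = memory.sample(batch_size=32)
    memory.update_priorities(idx, [0.5] * len(idx))
    print(f"sampled {len(batch)} experiences")
```

In the distributed setting described in the abstract, every push, sample, and priority update crosses the network between Actor, Learner, and memory server, which is why these per-access latencies dominate and why kernel bypassing and in-network placement pay off.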