This paper aims to mitigate straggler effects in synchronous distributed learning for multi-agent reinforcement learning (MARL) problems. Stragglers arise frequently in distributed learning systems due to various system disturbances, such as slowdowns or failures of compute nodes and communication bottlenecks. To resolve this issue, we propose a coded distributed learning framework that speeds up the training of MARL algorithms in the presence of stragglers while maintaining the same accuracy as the centralized approach. As an illustration, a coded distributed version of the multi-agent deep deterministic policy gradient (MADDPG) algorithm is developed and evaluated. Several coding schemes, including the maximum distance separable (MDS) code, random sparse code, replication-based code, and regular low-density parity-check (LDPC) code, are also investigated. Simulations on several multi-robot problems demonstrate the promising performance of the proposed framework.
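To make the coded-computation idea concrete, below is a minimal sketch of straggler-tolerant gradient aggregation in the style of gradient coding with a dense (MDS-like) random encoding matrix. All names and sizes here (k, n, dim, the matrix B, the set of responding workers) are illustrative assumptions, not the paper's exact construction or the specific schemes it evaluates.

```python
import numpy as np

# Minimal sketch: coded gradient aggregation under stragglers (assumed setup).
rng = np.random.default_rng(0)
k, n, dim = 4, 6, 8            # data partitions, workers, gradient dimension

g = rng.normal(size=(k, dim))  # stand-in per-partition gradients
B = rng.normal(size=(n, k))    # dense random encoding matrix (MDS-like)
coded = B @ g                  # worker i would send row i of `coded`

# Only a subset of workers respond in time; the rest are stragglers.
alive = [0, 2, 3, 5]           # any k responses suffice almost surely
B_r, y_r = B[alive], coded[alive]

# Decoding weights a with a^T B_r = 1^T, so a^T y_r equals the full
# gradient sum even though some workers never reported back.
a, *_ = np.linalg.lstsq(B_r.T, np.ones(k), rcond=None)
recovered = a @ y_r

assert np.allclose(recovered, g.sum(axis=0))
```

The same recovery logic applies to other encoding matrices (replication-based, sparse, or LDPC-style); what changes is how many and which worker responses are sufficient for decoding.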