Multi-Agent Reinforcement Learning (MARL) is a challenging subarea of Reinforcement Learning due to the non-stationarity of the environments and the large dimensionality of the combined action space. Deep MARL algorithms have been applied to solve different task offloading problems. However, in real-world applications, the information required by the agents (i.e., rewards and states) is subject to noise and alterations. The stability and robustness of deep MARL under such practical challenges remain an open research problem. In this work, we apply state-of-the-art MARL algorithms to solve task offloading with reward uncertainty. We show that perturbations in the reward signal can cause a decrease in performance compared to learning with perfect rewards. We expect this paper to stimulate further research into studying and addressing the practical challenges of deploying deep MARL solutions in wireless communication systems.
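One common way to model the reward uncertainty discussed above is to corrupt the true reward with zero-mean noise before it reaches the agents. The sketch below is illustrative only; the Gaussian noise model and the function name `perturb_reward` are assumptions for this example, not the paper's exact perturbation scheme.

```python
import random


def perturb_reward(reward: float, noise_std: float = 0.1,
                   rng: random.Random = random) -> float:
    """Return the true reward corrupted with zero-mean Gaussian noise.

    This is a minimal illustration of reward uncertainty: each agent
    observes `reward + N(0, noise_std^2)` instead of the true value.
    """
    return reward + rng.gauss(0.0, noise_std)


# Example: an agent trains on a noisy version of the environment reward.
true_reward = 1.0
observed_reward = perturb_reward(true_reward, noise_std=0.5)
```

With `noise_std = 0` the function reduces to the perfect-reward setting, which makes it easy to compare learning curves with and without perturbation.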