Inter-agent communication can significantly increase performance in multi-agent tasks that require coordination to achieve a shared goal. Prior work has shown that it is possible to learn inter-agent communication protocols using multi-agent reinforcement learning and message-passing network architectures. However, these models use an unconstrained broadcast communication model, in which an agent communicates with all other agents at every step, even when the task does not require it. In real-world applications, where communication may be limited by system constraints such as bandwidth, power, and network capacity, one may need to reduce the number of messages that are sent. In this work, we explore a simple method of minimizing communication while maximizing performance in multi-task learning: simultaneously optimizing a task-specific objective and a communication penalty. We show that the objectives can be optimized using REINFORCE and the Gumbel-Softmax reparameterization. We introduce two techniques to stabilize training: 50% training and message forwarding. First, training with the communication penalty on only 50% of episodes prevents our models from turning off their outgoing messages. Second, repeating messages received previously helps models retain information and further improves performance. With these techniques, we show that we can reduce communication by 75% with no loss of performance.
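As a concrete illustration (not the authors' implementation), the sketch below shows one way the two objectives could be combined: a task loss plus a penalty on the number of messages, with each agent's send decision sampled through the Gumbel-Softmax reparameterization and the penalty applied on only a random 50% of episodes. The names used here (`combined_loss`, `gate_logits`, `comm_penalty_weight`) are illustrative assumptions, as is the choice of a PyTorch-style setup.

```python
# A minimal sketch, assuming a PyTorch setup; this is not the authors' code.
# Each agent has a binary "send / don't send" gate whose logits come from its policy network.
import random
import torch
import torch.nn.functional as F

def combined_loss(task_loss, gate_logits, comm_penalty_weight=0.01,
                  apply_penalty=True, tau=1.0):
    """task_loss: scalar task-specific objective for the episode.
    gate_logits: (num_agents, 2) logits for each agent's no-send / send decision."""
    # Differentiable discrete gate via the Gumbel-Softmax reparameterization
    # (straight-through with hard=True); column 1 is read as "send a message".
    send = F.gumbel_softmax(gate_logits, tau=tau, hard=True)[:, 1]
    # Communication penalty: number of messages sent at this step.
    penalty = send.sum()
    # "50% training": the penalty is applied on only half of the episodes,
    # which keeps the models from silencing their outgoing messages entirely.
    return task_loss + comm_penalty_weight * penalty if apply_penalty else task_loss

# Per-episode usage: decide once whether this episode is penalized.
# task_loss would normally come from the RL objective; here it is a placeholder.
gate_logits = torch.randn(4, 2, requires_grad=True)   # 4 hypothetical agents
task_loss = torch.tensor(1.0, requires_grad=True)
loss = combined_loss(task_loss, gate_logits, apply_penalty=random.random() < 0.5)
loss.backward()
```

The same penalty term could instead be optimized with REINFORCE by treating the send decision as a stochastic action; the Gumbel-Softmax path is shown here only because it keeps the example short and fully differentiable.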