In this work we present an approach, inspired by Event-Triggered Control (ETC) techniques, to reduce the amount of information communicated in a multi-agent learning system. As a baseline scenario we consider a distributed Q-learning problem on a Markov Decision Process (MDP). Following an event-based approach, N agents explore the MDP and communicate experiences to a central learner only when necessary; the central learner then updates the actors' Q-functions. We analyse the convergence guarantees retained with respect to a regular Q-learning algorithm, and present experimental results showing that event-based communication yields a substantial reduction in data transmission rates in such distributed systems. Additionally, we discuss the effects (desired and undesired) that these event-based approaches have on the learning processes studied, and how they can be applied to more complex multi-agent learning systems.
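To make the setting concrete, the following is a minimal sketch, not the paper's actual algorithm, of one plausible event-based communication rule: each agent keeps a possibly stale local copy of the Q-table and forwards a transition to the central learner only when the locally computed TD error exceeds a threshold. All names and parameters (EventTriggeredAgent, threshold, epsilon) are hypothetical illustrations, and the TD-error trigger is an assumption about what "only when necessary" could mean.

```python
import numpy as np

class CentralLearner:
    """Holds the shared Q-table and applies a standard Q-learning update
    to every experience it receives."""
    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.95):
        self.Q = np.zeros((n_states, n_actions))
        self.alpha, self.gamma = alpha, gamma

    def update(self, s, a, r, s_next):
        target = r + self.gamma * self.Q[s_next].max()
        self.Q[s, a] += self.alpha * (target - self.Q[s, a])


class EventTriggeredAgent:
    """Explores the MDP and transmits an experience to the central learner
    only when its local TD error exceeds a threshold (hypothetical rule)."""
    def __init__(self, learner, threshold=0.05, epsilon=0.1):
        self.learner = learner
        self.threshold = threshold      # triggering threshold (assumed)
        self.epsilon = epsilon          # epsilon-greedy exploration rate
        self.Q_local = learner.Q.copy() # stale local copy, refreshed on sync
        self.sent, self.seen = 0, 0     # counters to measure communication savings

    def act(self, s, rng):
        # Epsilon-greedy action selection on the local (possibly outdated) Q-table.
        if rng.random() < self.epsilon:
            return int(rng.integers(self.Q_local.shape[1]))
        return int(self.Q_local[s].argmax())

    def observe(self, s, a, r, s_next):
        self.seen += 1
        td_error = r + self.learner.gamma * self.Q_local[s_next].max() - self.Q_local[s, a]
        if abs(td_error) > self.threshold:        # event: experience is "surprising" enough
            self.learner.update(s, a, r, s_next)  # communicate experience to central learner
            self.Q_local = self.learner.Q.copy()  # sync local copy after the update
            self.sent += 1
```

Under this sketch, the ratio sent/seen gives a simple proxy for the reduction in transmission rate relative to communicating every transition, which is the quantity the experiments in the paper evaluate.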