We present an approach, inspired by Event-Triggered Control (ETC) techniques, to reduce the communication required in a distributed Q-learning system. We consider a baseline scenario of a distributed Q-learning problem on a Markov Decision Process (MDP). Following an event-based approach, N agents explore the MDP and communicate experiences to a central learner only when necessary, and the central learner performs the updates of the actors' Q functions. We design an event-based distributed Q-learning system (EBd-Q) and derive convergence guarantees with respect to a vanilla Q-learning algorithm. We present experimental results showing that event-based communication yields a substantial reduction in data transmission rates in such distributed systems. Additionally, we discuss the effects, both desired and undesired, that these event-based approaches have on the learning processes studied, and how they can be applied to more complex multi-agent systems.
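To make the event-based mechanism concrete, the following is a minimal sketch of how an agent might decide when a communication event is triggered. It assumes a TD-error-threshold trigger as one plausible event condition; the trigger rule, the threshold parameter, and all class and method names here are illustrative assumptions, not the actual condition or implementation used in the paper.

```python
import numpy as np

class Agent:
    """Exploring agent that transmits experiences only on trigger events.

    ASSUMPTION: the trigger here is a TD-error threshold; the paper's
    actual event condition may differ.
    """

    def __init__(self, n_states, n_actions, threshold=0.1, gamma=0.99):
        self.Q = np.zeros((n_states, n_actions))  # local copy of the Q function
        self.threshold = threshold                # event-trigger sensitivity
        self.gamma = gamma

    def should_transmit(self, s, a, r, s_next):
        # Transmit only when the experience is "surprising" enough,
        # i.e. its TD error exceeds the trigger threshold.
        td_error = r + self.gamma * self.Q[s_next].max() - self.Q[s, a]
        return abs(td_error) > self.threshold


class CentralLearner:
    """Central learner that updates the Q function from received experiences."""

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.99):
        self.Q = np.zeros((n_states, n_actions))
        self.alpha = alpha
        self.gamma = gamma

    def update(self, s, a, r, s_next):
        # Standard Q-learning update, applied only to experiences
        # that triggered a communication event.
        target = r + self.gamma * self.Q[s_next].max()
        self.Q[s, a] += self.alpha * (target - self.Q[s, a])
```

Under this kind of rule, experiences whose TD error falls below the threshold are never sent, which is where the reduction in data transmission rates would come from; the trade-off is that suppressed experiences can slow or bias learning, which is the kind of desired/undesired effect the paper examines.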