Federated Reinforcement Learning (FedRL) enables distributed agents to learn collectively from each other's experience and improve their performance without exchanging raw trajectories. Existing work on FedRL assumes that all participating agents are homogeneous, i.e., that they share the same policy parameterization (e.g., network architecture and training configuration). However, in real-world applications, agents often differ in architecture and parameterization, possibly also because of disparate computational budgets. Since such homogeneity cannot be taken for granted in practice, we introduce the problem setting of Federated Reinforcement Learning with Heterogeneous And bLack-box agEnts (FedRL-HALE). We present the unique challenges this new setting poses and propose the Federated Heterogeneous Q-Learning (FedHQL) algorithm that addresses these challenges in a principled manner. We empirically demonstrate the efficacy of FedHQL in boosting the sample efficiency of heterogeneous agents with distinct policy parameterizations on standard RL tasks.