Intermittent client connectivity is one of the major challenges in centralized federated edge learning frameworks. Intermittently failing uplinks to the central parameter server (PS) can induce a large generalization gap, especially when the data distribution among the clients is heterogeneous. In this work, to mitigate communication blockages between clients and the central PS, we introduce the concept of knowledge relaying, wherein the successfully participating clients collaborate in relaying their neighbors' local updates to the PS in order to boost the participation of clients with intermittently failing connectivity. We propose a collaborative-relaying-based semi-decentralized federated edge learning framework where, at every communication round, each client first computes a local consensus of the updates from its neighboring clients and then transmits a weighted average of its own update and those of its neighbors to the PS. We optimize these averaging weights to reduce the variance of the global update at the PS while ensuring that it remains unbiased, thereby improving the convergence rate. Finally, through experiments on the CIFAR-10 dataset we validate our theoretical results and demonstrate that our scheme outperforms the federated averaging (FedAvg) benchmark, especially when the data distribution among clients is non-IID.
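The relaying step described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's optimized scheme: the uplink success probabilities `p`, the fully connected neighborhood, and the particular weight choice `alpha[i, j] = 1 / (n**2 * p[i])` are all hypothetical stand-ins, chosen only so that the aggregate at the PS is unbiased for the average client update in expectation (the paper additionally optimizes these weights to minimize variance).

```python
import numpy as np

rng = np.random.default_rng(0)

n = 4          # number of clients (hypothetical)
dim = 3        # model-update dimension (hypothetical)
# Per-client uplink success probabilities to the PS (hypothetical values).
p = np.array([0.9, 0.5, 0.7, 0.3])

# Relaying weights: alpha[i, j] is the weight client i applies to client j's
# update when forming its local consensus. With a fully connected
# neighborhood, alpha[i, j] = 1 / (n^2 * p_i) makes the PS aggregate an
# unbiased estimate of the average update: sum_i p_i * alpha[i, j] = 1 / n.
alpha = np.ones((n, n)) / (n**2 * p[:, None])

# Local updates (stand-ins for the clients' model deltas).
updates = rng.normal(size=(n, dim))

def global_round(alpha, updates, link_up):
    """One communication round: each client whose uplink succeeds relays a
    weighted average of its neighbors' updates to the PS."""
    agg = np.zeros(updates.shape[1])
    for i in range(len(updates)):
        if link_up[i]:                 # uplink i -> PS succeeded this round
            agg += alpha[i] @ updates  # client i's local consensus
    return agg

# Monte Carlo check of unbiasedness over many rounds with random outages.
trials = 20000
acc = np.zeros(dim)
for _ in range(trials):
    link_up = rng.random(n) < p        # simulate intermittent connectivity
    acc += global_round(alpha, updates, link_up)
est = acc / trials
target = updates.mean(axis=0)          # what the PS would see with no outages
```

Averaged over many simulated rounds, `est` approaches `target`, which is the unbiasedness property the averaging weights are required to preserve; the variance of a single round's aggregate is what the optimized weights then reduce.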