We present a semi-decentralized federated learning algorithm wherein clients collaborate by relaying their neighbors' local updates to a central parameter server (PS). In every communication round, each client computes a local consensus of the updates from its neighboring clients and then transmits a weighted average of its own update and those of its neighbors to the PS. We optimize these averaging weights to ensure that the global update at the PS is unbiased and to reduce its variance, consequently improving the rate of convergence. Numerical simulations substantiate our theoretical claims and demonstrate that, in settings with intermittent connectivity between the clients and the PS, our proposed algorithm achieves an improved convergence rate and accuracy in comparison with the federated averaging algorithm.
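The relaying-and-averaging mechanism described above can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's optimized scheme: it fixes a small neighbor graph, assumes every client reaches the PS in the round, and uses a simple column-stochastic weight choice purely to exhibit the unbiasedness condition (the weights placed on each client's update across all relayed messages must sum to one).

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 4, 3

# Hypothetical local model updates, one row per client.
updates = rng.standard_normal((n_clients, dim))

# Illustrative undirected neighbor graph with self-loops:
# adj[i, j] = 1 means client i can relay client j's update.
adj = np.array([[1, 1, 0, 0],
                [1, 1, 1, 0],
                [0, 1, 1, 1],
                [0, 0, 1, 1]], dtype=float)

# Averaging weights W[i, j]: the weight client i places on client j's
# update in the message it sends to the PS. For the PS average
# (1/n) * sum_i m_i to equal the true average (1/n) * sum_j x_j,
# each column of W must sum to 1. A simple (not optimized) choice:
W = adj / adj.sum(axis=0, keepdims=True)

messages = W @ updates              # m_i = sum_j W[i, j] * x_j
global_update = messages.mean(axis=0)
```

Under full connectivity this recovers the plain average of all local updates; the paper's contribution lies in choosing the weights to keep this property (in expectation) and minimize variance when client-to-PS links are intermittent.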