Data heterogeneity across participating devices poses one of the main challenges in federated learning, as it has been shown to substantially slow convergence and degrade generalization. In this work, we address this limitation by enabling personalization through multiple user-centric aggregation rules at the parameter server. Our approach potentially produces a personalized model for each user at the cost of some extra downlink communication overhead. To strike a trade-off between personalization and communication efficiency, we propose a broadcast protocol that limits the number of personalized streams while retaining the essential advantages of our learning scheme. Simulation results show that our approach achieves higher personalization capability, faster convergence, and better communication efficiency than competing baseline solutions.
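As a rough illustration of the idea (a minimal sketch, not the paper's exact algorithm), a user-centric aggregation step can be viewed as a per-user convex combination of client updates: each personalized stream applies its own mixing weights to the uploaded models. The weight matrix, grouping of users into streams, and function names below are illustrative assumptions.

```python
import numpy as np

def user_centric_aggregate(client_models, weights):
    """Aggregate client model parameters with per-user mixing weights.

    client_models: list of flattened parameter vectors, one per client.
    weights: (num_streams, num_clients) row-stochastic matrix; row u holds
             the user-centric aggregation rule for personalized stream u.
    Returns one personalized parameter vector per stream.
    """
    stacked = np.stack(client_models)  # shape: (num_clients, dim)
    return weights @ stacked           # shape: (num_streams, dim)

# Toy usage: 3 clients, 2 personalized streams (fewer streams than users,
# limiting downlink overhead as the proposed broadcast protocol intends).
rng = np.random.default_rng(0)
client_models = [rng.standard_normal(4) for _ in range(3)]
weights = np.array([[0.6, 0.3, 0.1],   # stream weighted toward client 0
                    [0.1, 0.3, 0.6]])  # stream weighted toward client 2
personalized = user_centric_aggregate(client_models, weights)
```

Setting every row of the weight matrix to the uniform average recovers standard federated averaging, so the number of distinct rows is the knob that trades personalization against broadcast cost.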