The increasingly stringent regulations on privacy protection have sparked interest in federated learning. As a distributed machine learning framework, it bridges isolated data islands by training a global model across devices while keeping data localized. In the context of recommender systems, many federated recommendation algorithms have been proposed to enable privacy-preserving collaborative recommendation. However, several challenges remain largely unexplored. One major concern is how to ensure fairness among the participants of federated learning, that is, how to maintain uniform recommendation performance across devices. In addition, data heterogeneity and limited network resources pose further challenges to convergence speed. To address these problems, in this paper we first propose a personalized federated recommendation training algorithm that improves the fairness of recommendation performance. We then adopt a clustering-based aggregation method to accelerate the training process. Combining the two components, we propose Cali3F, a calibrated fast and fair federated recommendation framework. Cali3F not only addresses the convergence problem through a within-cluster parameter sharing approach but also significantly improves fairness by calibrating local models with the global model. We demonstrate the performance of Cali3F on standard benchmark datasets and compare its efficacy with that of traditional aggregation approaches.
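To make the two components described above concrete, the following is a minimal sketch of the general idea, not the paper's exact algorithm: it assumes the calibration step takes the form of a convex combination of global (or cluster-level) and local parameters with a hypothetical mixing weight `alpha`, and that within-cluster parameter sharing amounts to averaging parameters only among clients assigned to the same cluster. All function and variable names here are illustrative.

```python
import numpy as np


def calibrate_local_model(local_params, global_params, alpha=0.5):
    """Blend each local parameter tensor with its global counterpart.

    Hypothetical calibration rule (an assumption, not the paper's exact update):
        theta_i <- alpha * theta_global + (1 - alpha) * theta_i
    """
    return {
        name: alpha * global_params[name] + (1.0 - alpha) * param
        for name, param in local_params.items()
    }


def cluster_aggregate(client_params, cluster_assignment):
    """Average parameters only within each cluster (within-cluster sharing)."""
    clusters = {}
    for cid, params in client_params.items():
        clusters.setdefault(cluster_assignment[cid], []).append(params)

    cluster_models = {}
    for k, members in clusters.items():
        cluster_models[k] = {
            name: np.mean([m[name] for m in members], axis=0)
            for name in members[0]
        }
    return cluster_models


# Toy usage: three clients, two clusters, a single embedding-like parameter.
rng = np.random.default_rng(0)
client_params = {i: {"embedding": rng.normal(size=(4, 2))} for i in range(3)}
assignment = {0: 0, 1: 0, 2: 1}

cluster_models = cluster_aggregate(client_params, assignment)
calibrated = calibrate_local_model(client_params[0], cluster_models[0], alpha=0.3)
```

In this sketch, restricting aggregation to within-cluster averaging limits interference between clients with dissimilar data, while the calibration step pulls each local model toward the shared model to even out performance across devices.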