A distinguishing characteristic of federated learning is that clients' local data can be statistically heterogeneous. This heterogeneity has motivated personalized learning, where individual (personalized) models are trained collaboratively. Various personalization methods have been proposed in the literature, with seemingly very different forms, ranging from the use of a single global model for local regularization and model interpolation, to the use of multiple global models for personalized clustering. In this work, we begin with a generative framework that can potentially unify several of these algorithms as well as suggest new ones. We apply this generative framework to personalized estimation and connect it to the classical empirical Bayes methodology; we also develop private personalized estimation under this framework. We then use the generative framework for learning, where it unifies several known personalized FL algorithms and suggests new ones; in particular, we propose and study AdaPeD, a new algorithm based on knowledge distillation, which numerically outperforms several known algorithms. We further develop privacy for personalized learning methods, with guarantees for user-level privacy and composition. We numerically evaluate both the performance and the privacy of our methods on the estimation and learning problems, demonstrating their advantages.
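To make the knowledge-distillation idea behind a personalized objective concrete, here is a minimal NumPy sketch of a distillation-style loss: a client's personal model is trained on its local label via cross-entropy, while a KL term pulls its (temperature-softened) predictions toward a shared global teacher. The weight `lam` and temperature `tau` are illustrative knobs for this sketch; this is not the paper's actual AdaPeD adaptation rule.

```python
import numpy as np

def softmax(z, tau=1.0):
    """Temperature-scaled softmax over a 1-D logit vector."""
    z = np.asarray(z, dtype=float) / tau
    z -= z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def kd_personalized_loss(student_logits, teacher_logits, label, lam=0.5, tau=2.0):
    """Local cross-entropy plus a KL pull toward the global teacher.

    `lam` trades off local fit vs. agreement with the shared model;
    `tau` softens both distributions before comparing them.
    """
    p_s = softmax(student_logits)
    ce = -np.log(p_s[label])          # cross-entropy on the client's true label
    p_t = softmax(teacher_logits, tau)
    p_st = softmax(student_logits, tau)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_st)))  # KL(teacher || student)
    return ce + lam * kl
```

When the student and teacher agree exactly, the KL term vanishes and the loss reduces to plain local cross-entropy; as they diverge, the penalty grows, which is the collaboration mechanism distillation-based personalization relies on.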