In personalized federated learning (FL), the central challenge is to balance local model improvement against global model tuning when the personal and global objectives are not perfectly aligned. Inspired by Bayesian hierarchical models, we develop a self-aware personalized FL method in which each client automatically balances the training of its local personal model against the global model, which implicitly contributes to other clients' training. This balance is derived from inter-client and intra-client uncertainty quantification: a larger inter-client variation implies that more personalization is needed. Accordingly, our method uses uncertainty-driven local training steps and an uncertainty-driven aggregation rule in place of conventional local fine-tuning and sample-size-based aggregation. Through experiments on synthetic data, Amazon Alexa audio data, and public datasets such as MNIST, FEMNIST, CIFAR10, and Sent140, we show that the proposed method achieves significantly better personalization performance than existing counterparts.
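To make the shrinkage intuition concrete, below is a minimal sketch, assuming a Gaussian hierarchical model with known variances, of how a client's local estimate can be traded off against the global mean via precision weighting. The function name `personalized_update` and the treatment of the variances as given scalars are illustrative assumptions, not the paper's actual algorithm; the sketch only shows why larger inter-client variation pushes the combined model toward the local one.

```python
import numpy as np

def personalized_update(theta_local, mu_global, sigma2_intra, sigma2_inter, n_i):
    """Precision-weighted combination of a client's local estimate and the
    global mean, as in a Gaussian hierarchical model (illustrative sketch).

    theta_local  : client's local parameter estimate (np.ndarray)
    mu_global    : global (population-level) mean parameter (np.ndarray)
    sigma2_intra : intra-client (within-client) variance estimate
    sigma2_inter : inter-client (across-client) variance estimate
    n_i          : client's local sample size
    """
    # The variance of the local estimate shrinks as local data grows.
    local_var = sigma2_intra / n_i
    # Weight on the local estimate grows with inter-client variation:
    # large sigma2_inter -> w near 1 -> stay personalized;
    # small sigma2_inter -> w near 0 -> follow the global model.
    w = sigma2_inter / (sigma2_inter + local_var)
    return w * theta_local + (1.0 - w) * mu_global

# Example: high inter-client variation keeps the client mostly personalized.
theta_local = np.array([0.9, -0.3])
mu_global = np.array([0.5, 0.1])
print(personalized_update(theta_local, mu_global,
                          sigma2_intra=1.0, sigma2_inter=4.0, n_i=20))
```

In this toy setting the local weight is about 0.99, so the client barely moves toward the global model; shrinking `sigma2_inter` toward zero would instead pull all clients to the shared mean, mirroring the paper's claim that inter-client variation governs the degree of personalization.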