Personalization in federated learning (FL) functions as a coordinator for clients with high variance in data or behavior. Ensuring the convergence of these clients' models depends on how closely users collaborate with those who have similar patterns or preferences. However, similarity is generally hard to quantify when users in a decentralized network have only limited knowledge of other users' models. To cope with this issue, we propose a personalized and fully decentralized FL algorithm that leverages knowledge distillation techniques to empower each device to discern statistical distances between local models. Each client device can enhance its performance without sharing local data by estimating the similarity between the intermediate outputs that two models produce on the same local samples, as in knowledge distillation. Our empirical studies demonstrate that the proposed algorithm improves clients' test accuracy in fewer iterations under highly non-independent and identically distributed (non-i.i.d.) data distributions and benefits agents with small datasets, even without a central server.
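To make the similarity estimation concrete, the following is a minimal sketch of how a client might compare its own model with a neighbor's model on its local samples, in the spirit of knowledge distillation. The function names (`model_similarity`, `collaboration_weights`), the use of PyTorch, the temperature value, and the softmax weighting of neighbors are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def model_similarity(local_model, neighbor_model, local_batch, temperature=3.0):
    """Hypothetical sketch: estimate a statistical distance between two models
    by comparing their softened outputs on the same local samples, as in
    knowledge distillation. A smaller KL divergence means more similar models."""
    local_model.eval()
    neighbor_model.eval()
    with torch.no_grad():
        # Intermediate outputs (here, pre-softmax logits) on the client's own data
        local_logits = local_model(local_batch)
        neighbor_logits = neighbor_model(local_batch)

        # Soften both distributions with a distillation temperature
        log_p = F.log_softmax(local_logits / temperature, dim=1)
        q = F.softmax(neighbor_logits / temperature, dim=1)

        # KL divergence averaged over the batch, used as the distance estimate
        distance = F.kl_div(log_p, q, reduction="batchmean").item()
    return distance

def collaboration_weights(distances, scale=1.0):
    """Turn per-neighbor distances into normalized collaboration weights:
    closer neighbors (smaller distance) receive larger weight."""
    d = torch.tensor(distances)
    return torch.softmax(-scale * d, dim=0)
```

In such a scheme, each client would evaluate `model_similarity` against every neighbor it hears from and then mix the neighbors' parameters (or distill from their outputs) using `collaboration_weights`, so that only local samples are ever used and no raw data leaves the device; the exact aggregation rule in the proposed algorithm may differ.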