While federated learning enables privacy-preserving collaborative learning without revealing local data, it remains vulnerable to white-box attacks and struggles to adapt to heterogeneous clients. Federated distillation (FD), built on knowledge distillation (an effective technique for transferring knowledge from a teacher model to student models), emerges as an alternative paradigm that provides stronger privacy guarantees and accommodates model heterogeneity. Nevertheless, variations in local data distributions and the absence of a well-trained teacher model lead to misleading and ambiguous knowledge sharing, which significantly degrades model performance. To address these issues, this paper proposes a selective knowledge sharing mechanism for FD, termed Selective-FD, which comprises client-side selectors and a server-side selector that accurately identify reliable knowledge from local and ensemble predictions, respectively. Empirical studies, backed by theoretical insights, demonstrate that our approach enhances the generalization capability of the FD framework and consistently outperforms baseline methods. This study presents a promising direction for effective knowledge transfer in privacy-preserving collaborative learning.
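The selective sharing mechanism described above can be illustrated with a minimal sketch. This is an assumption-laden toy implementation, not the paper's actual method: it models both the client-side and server-side selectors as simple entropy thresholds on soft predictions, where low entropy is taken as a proxy for confident, unambiguous knowledge. The function names, thresholding rule, and averaging-based ensemble are all hypothetical choices for illustration.

```python
import numpy as np

def entropy(p):
    # Shannon entropy of a probability vector; lower means more confident.
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def client_selector(local_preds, threshold=1.0):
    # Client-side selector (hypothetical rule): keep only confident,
    # low-entropy local soft predictions for sharing with the server.
    return [p for p in local_preds if entropy(p) < threshold]

def server_selector(client_preds_per_sample, threshold=1.0):
    # Server-side selector (hypothetical rule): average the shared client
    # predictions for each sample, then keep only unambiguous ensemble
    # predictions as distillation targets.
    selected = []
    for preds in client_preds_per_sample:
        if not preds:
            continue  # no client shared knowledge for this sample
        ensemble = np.mean(preds, axis=0)
        if entropy(ensemble) < threshold:
            selected.append(ensemble)
    return selected
```

For example, a confident prediction such as `[0.9, 0.05, 0.05]` (entropy ≈ 0.39) passes the selectors, while a uniform, ambiguous prediction `[1/3, 1/3, 1/3]` (entropy ≈ 1.10) is filtered out and never becomes a distillation target.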