Federated learning (FL) faces three major difficulties: cross-domain data, heterogeneous model architectures, and non-i.i.d. label distributions. Existing FL methods cannot handle all three constraints simultaneously, and they require weakening privacy protection (e.g., by sharing the model architecture or the data category distribution). In this work, we introduce the challenging "completely heterogeneous" scenario in FL, in which no client exposes any private information, including its feature space, model architecture, or label distribution. We then devise an FL framework based on parameter decoupling and data-free knowledge distillation to solve this problem. Experiments show that our proposed method achieves high performance in completely heterogeneous scenarios where other approaches fail.
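To make the core idea concrete, below is a minimal sketch of data-free knowledge distillation across heterogeneous clients. This is an illustrative assumption, not the paper's actual algorithm: the Gaussian pseudo-input generator, the single-softmax-layer student, and the simple ensemble averaging are all placeholders. The key property it demonstrates is that clients are treated as black boxes that expose only soft predictions, never their architectures, features, or label distributions.

```python
import numpy as np

rng = np.random.default_rng(0)
D, C, N = 8, 3, 256  # feature dim, classes, pseudo-samples (illustrative sizes)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Each client is a black box: it exposes only soft predictions, never its
# model architecture or label distribution ("completely heterogeneous").
client_weights = [rng.normal(size=(D, C)) for _ in range(3)]
clients = [lambda x, W=W: softmax(x @ W) for W in client_weights]

# Data-free step: the server synthesizes pseudo-inputs instead of touching
# any client's private data (plain Gaussian noise stands in for a trained
# generator here).
x_syn = rng.normal(size=(N, D))

# Ensemble the clients' soft labels to form the distillation target.
teacher = np.mean([f(x_syn) for f in clients], axis=0)

# Distill into the server's student model (a single softmax layer for
# brevity) by minimizing cross-entropy against the ensembled soft labels.
W_student = np.zeros((D, C))
losses = []
for _ in range(200):
    p = softmax(x_syn @ W_student)
    losses.append(-np.mean(np.sum(teacher * np.log(p + 1e-12), axis=1)))
    grad = x_syn.T @ (p - teacher) / N  # gradient of the cross-entropy loss
    W_student -= 0.5 * grad
```

Because only synthetic inputs and soft predictions cross the client boundary, the protocol never reveals feature spaces, architectures, or label distributions.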