Federated learning (FL) is a machine learning technique that enables participants to collaboratively train high-quality models without exchanging their private data. Participants in cross-silo FL (CS-FL) settings are independent organizations with different task needs; they are concerned not only with data privacy but also with independently training their own unique models, owing to intellectual property considerations. Most existing FL methods cannot satisfy these scenarios. In this paper, we propose an FL method based on pseudolabeling unlabeled data through a co-training-style process. To the best of our knowledge, this is the first FL method that is simultaneously compatible with heterogeneous tasks, heterogeneous models, and heterogeneous training algorithms. Experimental results show that the proposed method outperforms competing methods, especially in non-independent and identically distributed (non-IID) settings with heterogeneous models, where it achieves a 35% performance improvement.
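To illustrate the co-training-style pseudo-labeling idea in miniature (this is a hypothetical sketch, not the paper's actual algorithm), consider two "silos" that each hold a private labeled set and share only predictions on a common unlabeled pool. Each round, one model pseudo-labels the pool where it is confident, and the other model retrains on its own data plus those pseudo-labels; raw private data never leaves a silo. All function names and thresholds here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # Toy 2-D binary problem: label is the sign of x0 + x1.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    return X, y

def fit(X, y, steps=500, lr=0.5):
    """Plain logistic regression trained by gradient descent; returns weights."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def proba(w, X):
    return 1.0 / (1.0 + np.exp(-X @ w))

# Each silo's private labeled data; the unlabeled pool is shared.
X_a, y_a = make_data(30)
X_b, y_b = make_data(30)
X_u, _ = make_data(500)

w_a = fit(X_a, y_a)
w_b = fit(X_b, y_b)

for _ in range(3):  # a few co-training rounds
    p_a = proba(w_a, X_u)
    p_b = proba(w_b, X_u)
    conf_a = np.abs(p_a - 0.5) > 0.4  # where model A is confident (p > 0.9 or < 0.1)
    conf_b = np.abs(p_b - 0.5) > 0.4
    # Silos exchange pseudo-labels, never data: B retrains with A's
    # confident labels, and vice versa.
    w_b = fit(np.vstack([X_b, X_u[conf_a]]),
              np.concatenate([y_b, (p_a[conf_a] > 0.5).astype(float)]))
    w_a = fit(np.vstack([X_a, X_u[conf_b]]),
              np.concatenate([y_a, (p_b[conf_b] > 0.5).astype(float)]))

X_t, y_t = make_data(200)
acc = ((proba(w_a, X_t) > 0.5) == y_t).mean()
print(round(acc, 2))
```

Because only predictions on the shared pool cross silo boundaries, the two participants could just as well use different model architectures or training algorithms, which is the heterogeneity the abstract emphasizes.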