Federated learning is a contemporary machine learning paradigm in which locally trained models are distilled into a global model. Due to the intrinsic permutation invariance of neural networks, Probabilistic Federated Neural Matching (PFNM) employs a Bayesian nonparametric framework to model the generation process of local neurons, and then solves a linear sum assignment problem in each alternating optimization iteration. However, according to our theoretical analysis, the optimization iteration in PFNM neglects existing global information. In this study, we propose a novel approach that overcomes this flaw by introducing a Kullback-Leibler divergence penalty at each iteration. The effectiveness of our approach is demonstrated by experiments on both image classification and semantic segmentation tasks.
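To make the matching step concrete, the following is a minimal sketch of a PFNM-style iteration that pairs local neurons with global neurons by solving a linear sum assignment, with a KL-divergence penalty added to the assignment cost in the spirit of the proposed approach. The squared-distance base cost, the softmax normalization of weight vectors, and the `kl_weight` parameter are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.special import rel_entr  # elementwise p * log(p / q)

def softmax(x):
    """Numerically stable softmax; turns a weight vector into a distribution."""
    e = np.exp(x - x.max())
    return e / e.sum()

def match_neurons(local_w, global_w, kl_weight=0.1):
    """Match local neurons to global neurons via linear sum assignment.

    local_w:   (J, D) weight vectors of local neurons
    global_w:  (L, D) weight vectors of the current global neurons
    kl_weight: strength of the illustrative KL penalty (an assumption)
    Returns the optimal assignment as (row_indices, col_indices).
    """
    J, L = local_w.shape[0], global_w.shape[0]
    cost = np.zeros((J, L))
    for j in range(J):
        p = softmax(local_w[j])
        for l in range(L):
            q = softmax(global_w[l])
            # Base matching cost: squared Euclidean distance (illustrative).
            dist = np.sum((local_w[j] - global_w[l]) ** 2)
            # KL penalty pulling local neurons toward the global reference,
            # so the iteration does not discard existing global information.
            kl = np.sum(rel_entr(p, q))
            cost[j, l] = dist + kl_weight * kl
    # The Hungarian algorithm solves the linear sum assignment exactly.
    return linear_sum_assignment(cost)

# Toy usage: match 4 local neurons against 5 global neurons.
rng = np.random.default_rng(0)
rows, cols = match_neurons(rng.normal(size=(4, 8)), rng.normal(size=(5, 8)))
print(list(zip(rows, cols)))
```

Setting `kl_weight=0` recovers a plain assignment on the base cost alone, which mirrors the flaw the paper identifies: each iteration would then be solved without regard to the accumulated global model.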