Federated Learning is an emerging privacy-preserving distributed machine learning approach that builds a shared model by training locally on participating devices (clients) and aggregating the local models into a global one. Because this approach avoids centralized data collection and aggregation, it substantially reduces the associated privacy risks. However, the data samples across participating clients are usually not independent and identically distributed (non-IID), so the out-of-distribution (OOD) generalization of the learned model can be poor. Beyond this challenge, federated learning remains vulnerable to various security attacks in which a few malicious participants attempt to insert backdoors, degrade the aggregated model, or infer the data owned by other participants. In this paper, we propose an approach for learning invariant (causal) features common to all participating clients in a federated learning setup, and we empirically analyze how it improves both the out-of-distribution (OOD) accuracy and the privacy of the final learned model.
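To make the train-locally-then-aggregate loop described above concrete, here is a minimal sketch of FedAvg-style weight averaging on a toy non-IID setup. The helper names (`local_update`, `fed_avg`), the linear model, and the per-client feature shifts are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on its own data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient for a linear model
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server-side aggregation: average local models, weighted by data size."""
    total = sum(client_sizes)
    return sum(n / total * w for w, n in zip(client_weights, client_sizes))

# Toy non-IID setup: each client's features are shifted differently,
# so local data distributions differ across clients.
w_global = np.zeros(3)
clients = []
for shift in (0.0, 1.0, -1.0):
    X = rng.normal(shift, 1.0, size=(50, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0.0, 0.1, 50)
    clients.append((X, y))

for _ in range(10):  # communication rounds: local training, then aggregation
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    w_global = fed_avg(local_ws, [len(y) for _, y in clients])

print("aggregated weights:", w_global)
```

Note that only model weights, never raw data, travel between clients and server; the non-IID shifts in this sketch are exactly what can hurt OOD generalization of the aggregated model.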