Federated learning has received attention for its efficiency and privacy benefits in settings where data is distributed among devices. Although federated learning shows significant promise as a key approach when data cannot be shared or centralized, current incarnations offer limited privacy properties and have shortcomings when applied to common real-world scenarios. One such scenario is heterogeneous data among devices, where data may come from different generating distributions. In this paper, we propose a federated learning framework that uses a mixture of experts to balance the specialist nature of a locally trained model with the generalist knowledge of a global model in a federated learning setting. Our results show that the mixture of experts model is better suited as a personalized model for devices when data is heterogeneous, outperforming both global and local models. Furthermore, our framework provides strict privacy guarantees, allowing clients to select parts of their data to exclude from the federation. The evaluation shows that the proposed solution is robust in the setting where some users require strict privacy and do not disclose their models to a central server at all, opting out of the federation partially or entirely. The proposed framework is general enough to include any kind of machine learning model, and can even combine models of different kinds.
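The personalization idea described above can be illustrated with a minimal sketch: each client blends the prediction of its locally trained "specialist" with that of the federated "generalist" through a gating weight. This is not the authors' implementation; all names (`local_model`, `global_model`, `gate`) are illustrative placeholders, and in the actual framework the gate would itself be a trained model producing an input-dependent weight.

```python
def local_model(x):
    # Specialist: trained only on this client's (possibly skewed) data.
    return 2.0 * x  # placeholder predictor

def global_model(x):
    # Generalist: the model obtained through federated averaging.
    return 0.5 * x  # placeholder predictor

def gate(x):
    # Gating function: returns a mixing weight in [0, 1]. Here it is a
    # constant for simplicity; in practice it would depend on the input.
    return 0.75

def personalized_prediction(x):
    # Convex combination of the specialist and generalist outputs.
    w = gate(x)
    return w * local_model(x) + (1.0 - w) * global_model(x)
```

For example, with the placeholder models above, `personalized_prediction(4.0)` mixes the local output 8.0 and the global output 2.0 with weight 0.75, giving 6.5. A client that opts out of the federation entirely would simply fall back to `local_model` (equivalently, a gate weight of 1).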