Federated Learning is a distributed machine learning paradigm that allows clients to learn collaboratively without sharing their private data; collaboration is achieved by exchanging model parameters. However, differences in data distributions and computing resources among clients make such collaboration difficult. To address this heterogeneity, we propose a novel Federated Learning method. Our method uses a pre-trained model as the backbone of the local model, with fully connected layers serving as the head. The backbone extracts features for the head, and class embedding vectors are shared among clients to improve the head and thereby enhance the performance of the local model. By sharing class embedding vectors instead of gradient-based parameters, clients can better adapt to their private data, and communication between the server and clients is more efficient. To protect privacy, we propose a privacy-preserving hybrid method that adds noise to the class embedding vectors; it has minimal effect on local-model performance while satisfying differential privacy. We conduct a comprehensive evaluation of our approach on a self-built vehicle dataset, comparing it with other Federated Learning methods under non-independent and identically distributed (Non-IID) data.
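As a minimal sketch of the architecture described above (assuming a PyTorch implementation with a ResNet-18 backbone; names such as LocalModel and class_embeddings are illustrative, not from the paper), each client freezes a pre-trained backbone, trains only a fully connected head, and computes per-class mean feature vectors as the class embeddings to be shared:

import torch
import torch.nn as nn
from torchvision import models

class LocalModel(nn.Module):
    """Sketch: frozen pre-trained backbone + trainable fully connected head."""
    def __init__(self, num_classes: int):
        super().__init__()
        resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])  # drop final FC
        for p in self.backbone.parameters():
            p.requires_grad = False  # only the head is updated locally
        self.head = nn.Linear(512, num_classes)

    def features(self, x):
        return self.backbone(x).flatten(1)  # (N, 512) feature vectors

    def forward(self, x):
        return self.head(self.features(x))

@torch.no_grad()
def class_embeddings(model, loader, num_classes, dim=512):
    """Mean backbone feature per class -- the vector each client shares."""
    sums = torch.zeros(num_classes, dim)
    counts = torch.zeros(num_classes)
    for x, y in loader:
        sums.index_add_(0, y, model.features(x))
        counts += torch.bincount(y, minlength=num_classes).float()
    return sums / counts.clamp(min=1).unsqueeze(1)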
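The privacy step could, for instance, follow the standard Gaussian mechanism: clip each class embedding to a bounded L2 norm and add calibrated noise before sharing. The function below is a hedged sketch; the clipping bound and noise scale are placeholders, and calibrating the noise to a given (epsilon, delta) budget follows the usual Gaussian-mechanism analysis rather than values from the paper.

import torch

def privatize(emb: torch.Tensor, clip: float = 1.0, sigma: float = 0.5) -> torch.Tensor:
    """Clip each class embedding to L2 norm <= clip, then add Gaussian noise.
    clip and sigma are illustrative placeholders, not values from the paper."""
    norms = emb.norm(dim=1, keepdim=True).clamp(min=1e-12)
    clipped = emb * (clip / norms).clamp(max=1.0)  # bound the per-class sensitivity
    return clipped + torch.randn_like(clipped) * sigma * clip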