Detection models trained by one party (the server) may suffer severe performance degradation when distributed to other users (clients). For example, in autonomous driving, different driving environments can introduce significant domain shifts, which bias model predictions. Federated learning, which has emerged in recent years, enables multi-party collaborative training without exposing client data. In this paper, we focus on a special cross-domain scenario in which the server holds large-scale data while each of the multiple clients holds only a small amount, and the data distributions differ across clients. In this setting, traditional federated learning techniques cannot simultaneously learn the global knowledge shared by all participants and the personalized knowledge of a specific client. To address this limitation, we propose a cross-domain federated object detection framework, named FedOD. To learn both the global knowledge and the personalized knowledge of different domains, the proposed framework first performs federated training to obtain a public global aggregated model via multi-teacher distillation, and then sends the aggregated model back to each client to fine-tune a personalized local model. After only a few rounds of communication, each client can perform weighted ensemble inference over the public global model and its personalized local model. With this ensemble, the client-side model achieves better generalization than a single model of the same parameter scale. We construct a federated object detection dataset with significant background and instance differences from multiple public autonomous driving datasets, and conduct extensive experiments on it. The experimental results validate the effectiveness of the proposed method.
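The client-side weighted ensemble inference can be sketched as a convex combination of the two models' per-class confidence scores. The abstract does not specify the exact combination rule, so the function below is an illustrative assumption: `alpha` weighting the global model is a hypothetical parameter, and real detection outputs would also require box matching before score fusion.

```python
import numpy as np

def weighted_ensemble(global_scores, local_scores, alpha=0.5):
    """Combine per-class confidence scores from the public global model
    and the personalized local model for the same candidate box.

    alpha weights the global model; (1 - alpha) weights the local model.
    This is a hypothetical sketch of the fusion step, not the paper's
    exact rule.
    """
    g = np.asarray(global_scores, dtype=float)
    l = np.asarray(local_scores, dtype=float)
    return alpha * g + (1.0 - alpha) * l

# Example: class confidences for one box from each model.
combined = weighted_ensemble([0.9, 0.1], [0.6, 0.4], alpha=0.5)
# combined -> [0.75, 0.25]
```

A larger `alpha` favors the globally aggregated knowledge; a smaller one favors the client's personalized model, which suits clients whose local distribution deviates strongly from the server's.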