Federated learning methods enable us to train machine learning models on distributed user data while preserving user privacy. However, it is not always feasible to obtain high-quality supervisory signals from users, especially for vision tasks. Unlike typical federated settings with labeled client data, we consider a more practical scenario where the distributed client data is unlabeled and a centralized labeled dataset is available on the server. We further take server-client and inter-client domain shifts into account and pose a domain adaptation problem with one source (centralized server data) and multiple targets (distributed client data). Within this new Federated Multi-Target Domain Adaptation (FMTDA) task, we analyze the performance of existing domain adaptation methods and propose an effective DualAdapt method to address the new challenges. Extensive experimental results on image classification and semantic segmentation tasks demonstrate that our method achieves high accuracy, incurs minimal communication cost, and requires low computational resources on client devices.