Federated Learning (FL) has recently emerged as a possible way to tackle the domain shift in real-world Semantic Segmentation (SS) without compromising the private nature of the collected data. However, most of the existing works on FL unrealistically assume labeled data in the remote clients. Here we propose a novel task (FFREEDA) in which the clients' data is unlabeled and the server accesses a labeled source dataset for pre-training only. To solve FFREEDA, we propose LADD, which leverages the knowledge of the pre-trained model by employing self-supervision with ad-hoc regularization techniques for local training and by introducing a novel federated clustered aggregation scheme based on the clients' style. Our experiments show that our algorithm efficiently tackles the new task, outperforming existing approaches. The code is available at https://github.com/Erosinho13/LADD.
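To make the style-driven clustered aggregation idea concrete, below is a minimal Python sketch of one plausible realization: each client summarizes its images with a low-frequency FFT-amplitude style descriptor, the server clusters clients by these descriptors, and model parameters are averaged within each cluster. This is an illustrative assumption, not the authors' exact implementation; the names `extract_style`, `clustered_aggregate`, and the choice of k-means and FFT radius are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_style(images, radius=3):
    """Compute a compact style descriptor for a client: the mean
    low-frequency FFT amplitude of its images (an assumption here)."""
    amps = []
    for img in images:  # img: HxWxC float array
        fft = np.fft.fftshift(np.fft.fft2(img, axes=(0, 1)), axes=(0, 1))
        h, w = img.shape[:2]
        cy, cx = h // 2, w // 2
        low = np.abs(fft[cy - radius:cy + radius + 1,
                         cx - radius:cx + radius + 1])
        amps.append(low.ravel())
    return np.mean(amps, axis=0)

def clustered_aggregate(style_vecs, weights, n_samples, num_clusters=3):
    """Group clients by style descriptor, then FedAvg (sample-size
    weighted averaging) within each cluster, yielding one model per
    style cluster instead of a single global model."""
    labels = KMeans(n_clusters=num_clusters, n_init=10).fit_predict(
        np.stack(style_vecs))
    cluster_models = {}
    for c in range(num_clusters):
        idx = [i for i, lab in enumerate(labels) if lab == c]
        total = sum(n_samples[i] for i in idx)
        cluster_models[c] = sum(
            (n_samples[i] / total) * weights[i] for i in idx)
    return labels, cluster_models
```

The design intuition is that clients drawn from visually similar domains (e.g., similar weather or camera characteristics) share low-frequency amplitude statistics, so averaging only within a style cluster avoids blending models adapted to incompatible target domains.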