Domain generalization (DG) has been a hot topic in image recognition, with the goal of training a general model that performs well on unseen domains. Recently, federated learning (FL), an emerging machine learning paradigm that trains a global model from multiple decentralized clients without compromising data privacy, has brought new challenges, as well as new possibilities, to DG. In the FL scenario, many existing state-of-the-art (SOTA) DG methods become ineffective, because they require the centralization of data from different domains during training. In this paper, we propose a novel domain generalization method for image recognition under federated learning through cross-client style transfer (CCST) without exchanging data samples. Our CCST method leads to more uniform distributions across source clients, so that each local model learns to fit the image styles of all clients and avoids divergent model biases. Two types of style (single image style and overall domain style), with corresponding mechanisms, are proposed and can be chosen according to the scenario. Our style representation is exceptionally lightweight and can hardly be used to reconstruct the dataset. The level of style diversity can also be flexibly controlled with a hyper-parameter. Our method outperforms recent SOTA DG methods on two DG benchmarks (PACS, OfficeHome) and a large-scale medical image dataset (Camelyon17) in the FL setting. Last but not least, our method is orthogonal to many classic DG methods, achieving additive performance gains when used in combination.
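The abstract does not spell out what the "exceptionally lightweight" style representation is; a common choice for such a representation in style-transfer work is per-channel feature statistics in the style of AdaIN (mean and standard deviation), which can be shared across clients without exposing raw images. The sketch below is an illustrative assumption, not the paper's exact method: `channel_stats` would be each client's shareable "style", and `transfer_style` re-normalizes another client's features to match it.

```python
import numpy as np

def channel_stats(feat):
    """Per-channel mean and std of a (C, H, W) feature map.

    This pair of vectors is the hypothetical lightweight 'style':
    it reveals only aggregate statistics, not the image content.
    """
    mu = feat.mean(axis=(1, 2), keepdims=True)
    sigma = feat.std(axis=(1, 2), keepdims=True) + 1e-6  # avoid divide-by-zero
    return mu, sigma

def transfer_style(content_feat, style_mu, style_sigma):
    """AdaIN-style transfer: normalize content features, then
    re-scale/shift them to match the target style statistics."""
    mu, sigma = channel_stats(content_feat)
    return style_sigma * (content_feat - mu) / sigma + style_mu
```

Under this sketch, a "single image style" would be the statistics of one image's features, while an "overall domain style" could be obtained by averaging these statistics over a client's whole dataset — again an assumption about the mechanism, consistent with the two style types the abstract names.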