Discriminative unsupervised learning methods such as contrastive learning have demonstrated the ability to learn generalized visual representations on centralized data. It is nonetheless challenging to adapt such methods to a distributed system with unlabeled, private, and heterogeneous client data, where the heterogeneity arises from differing user styles and preferences. Federated learning enables multiple clients to collectively learn a global model without exposing private data between local clients. A separate line of federated learning research studies personalized methods to address local heterogeneity. However, work that addresses both generalization and personalization without labels in a decentralized setting remains scarce. In this work, we propose a novel method, FedStyle, which learns a more generalized global model by infusing local style information alongside local content information for contrastive learning, and learns more personalized local models by incorporating local style information for downstream tasks. The style information is extracted by contrasting original local data with strongly augmented local data (Sobel-filtered images). Through extensive experiments with linear evaluations in both IID and non-IID settings, we demonstrate that FedStyle outperforms both generalization and personalization baseline methods in a stylized decentralized setting. Through comprehensive ablations, we demonstrate that our designs for style infusion and stylized personalization significantly improve performance.
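The abstract describes extracting style by contrasting an original image with a strongly augmented, Sobel-filtered counterpart, which keeps content edges but strips color and texture style. The following is a minimal NumPy sketch of that idea, not the paper's implementation: `sobel_filter` is a standard gradient-magnitude Sobel pass, and `style_vector` is a hypothetical stand-in that treats style as the residual between embeddings of the original and the filtered image (the `embed` function is assumed to be provided by the caller).

```python
import numpy as np

def sobel_filter(img):
    """Gradient-magnitude Sobel filtering of a 2-D grayscale image.

    This is the 'strong augmentation' the abstract mentions: it keeps
    content edges while discarding local style (color/texture)."""
    kx = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    # Gradient magnitude: large at content edges, zero in flat regions.
    return np.hypot(gx, gy)

def style_vector(embed, img):
    """Hypothetical style extraction: the residual between an embedding
    of the original image and of its Sobel-filtered (style-stripped)
    counterpart.  `embed` maps an image array to a feature vector; in
    FedStyle this role would be played by a learned encoder."""
    return embed(img) - embed(sobel_filter(img))
```

As a sanity check, applying `sobel_filter` to a step image yields zero response in flat regions and a strong response along the vertical boundary, which is exactly the content-preserving behavior the augmentation relies on.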