Federated learning aims to train models collaboratively across different clients without sharing data, due to privacy considerations. However, one major challenge for this learning paradigm is the {\em data heterogeneity} problem, which refers to the discrepancies between the local data distributions among various clients. To tackle this problem, we first study how data heterogeneity affects the representations of the globally aggregated models. Interestingly, we find that heterogeneous data causes the global model to suffer from severe {\em dimensional collapse}, in which representations tend to reside in a lower-dimensional space instead of the ambient space. Moreover, we observe a similar phenomenon on models locally trained on each client and deduce that the dimensional collapse of the global model is inherited from the local models. In addition, we theoretically analyze the gradient flow dynamics to shed light on how data heterogeneity results in dimensional collapse for local models. To remedy this problem caused by data heterogeneity, we propose {\sc FedDecorr}, a novel method that can effectively mitigate dimensional collapse in federated learning. Specifically, {\sc FedDecorr} applies a regularization term during local training that encourages different dimensions of representations to be uncorrelated. {\sc FedDecorr}, which is implementation-friendly and computationally efficient, yields consistent improvements over baselines on standard benchmark datasets. Code: https://github.com/bytedance/FedDecorr.
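To make the decorrelation idea concrete, below is a minimal PyTorch sketch of a regularizer consistent with the description above: it standardizes each representation dimension over a mini-batch, forms the correlation matrix, and penalizes its squared Frobenius norm so that off-diagonal correlations are pushed toward zero. The function name `feddecorr_loss`, the $1/d^2$ scaling, and the weight `beta` in the usage comment are illustrative assumptions, not necessarily the released implementation (see the linked repository for the authors' code).

```python
import torch

def feddecorr_loss(z: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Decorrelation regularizer sketch.

    z: a mini-batch of representations, shape (N, d).
    Returns a scalar penalty that is small when different
    representation dimensions are uncorrelated.
    """
    n, d = z.shape
    # Standardize each dimension to zero mean and unit variance.
    z = (z - z.mean(dim=0)) / (z.std(dim=0) + eps)
    # Correlation matrix across dimensions, shape (d, d).
    corr = (z.T @ z) / n
    # Squared Frobenius-norm penalty, scaled by 1/d^2 (assumed scaling).
    return (corr ** 2).sum() / (d ** 2)

# Hypothetical usage during local training on each client:
#   loss = task_loss + beta * feddecorr_loss(features)
```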