Attribute skew prevents current federated learning (FL) frameworks from maintaining consistent optimization directions across clients, which inevitably leads to performance degradation and unstable convergence. The core problems are twofold: 1) domain-specific attributes, which are non-causal and only locally valid, are inadvertently mixed into global aggregation; 2) one-stage optimization of entangled attributes cannot simultaneously satisfy two conflicting objectives, i.e., generalization and personalization. To cope with these problems, we propose disentangled federated learning (DFL), which disentangles the domain-specific and cross-invariant attributes into two complementary branches trained independently by the proposed alternating local-global optimization. Importantly, our convergence analysis proves that the FL system converges stably even when incomplete client models participate in the global aggregation, which greatly expands the application scope of FL. Extensive experiments verify that DFL achieves higher performance, better interpretability, and a faster convergence rate than SOTA FL methods on both manually synthesized and realistic attribute-skew datasets.
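The two-branch split and training schedule described above can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the paper's implementation: the TwoBranchModel layout, the local_round and aggregate_invariant helpers, the FedAvg-style averaging, and all dimensions are assumptions introduced here for illustration; only the overall pattern (a locally kept domain-specific branch, a shared cross-invariant branch, alternating branch-wise local updates, and aggregation of the invariant branch alone) follows the description above.

```python
# Minimal sketch of DFL-style two-branch training, assuming a PyTorch
# setting. Class and function names, layer sizes, and the FedAvg-style
# averaging are illustrative assumptions, not the paper's exact method.
import torch
import torch.nn as nn


class TwoBranchModel(nn.Module):
    """Hypothetical client model: an invariant branch intended for global
    sharing and a domain-specific branch that stays on the client."""

    def __init__(self, in_dim=32, hid=16, n_classes=10):
        super().__init__()
        self.invariant = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.specific = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.head = nn.Linear(2 * hid, n_classes)

    def forward(self, x):
        # Concatenate the two complementary representations.
        z = torch.cat([self.invariant(x), self.specific(x)], dim=-1)
        return self.head(z)


def local_round(model, loader, loss_fn, lr=1e-2, steps=1):
    """Alternating local optimization: first update the domain-specific
    branch (personalization), then the invariant branch (generalization),
    each with its own optimizer over that branch plus the head."""
    for branch in (model.specific, model.invariant):
        params = list(branch.parameters()) + list(model.head.parameters())
        opt = torch.optim.SGD(params, lr=lr)
        for _ in range(steps):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()


def aggregate_invariant(global_model, client_models):
    """FedAvg-style averaging of the invariant branch only; the
    domain-specific branches never leave the clients, so the global step
    sees deliberately incomplete client models."""
    inv_states = [m.invariant.state_dict() for m in client_models]
    avg = {k: torch.stack([s[k] for s in inv_states]).mean(0)
           for k in inv_states[0]}
    global_model.invariant.load_state_dict(avg)
    for m in client_models:  # broadcast the aggregated invariant branch
        m.invariant.load_state_dict(avg)
```

Note that aggregate_invariant intentionally operates on incomplete client models, since the domain-specific branches remain local; this is exactly the setting that the convergence analysis above shows still converges stably.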