Federated Learning (FL) is an increasingly popular machine learning paradigm in which multiple nodes try to collaboratively learn under privacy, communication, and heterogeneity constraints. A persistent problem in federated learning is that it is not clear what the optimization objective should be: the standard average risk minimization of supervised learning is inadequate for handling several major constraints specific to federated learning, such as communication adaptivity and personalization control. We identify several key desiderata for federated learning frameworks and introduce a new framework, FLIX, that takes into account the unique challenges brought by federated learning. FLIX has a standard finite-sum form, which enables practitioners to tap into the immense wealth of existing (potentially non-local) methods for distributed optimization. Thanks to a smart initialization that requires no communication, FLIX dispenses with local steps yet is still provably capable of performing dissimilarity regularization on par with local methods. We give several algorithms for solving the FLIX formulation efficiently under communication constraints. Finally, we corroborate our theoretical results with extensive experimentation.
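To make the "standard finite-sum form" concrete, the following is a minimal sketch of the kind of personalized objective the abstract alludes to. The notation is ours, not quoted from this section: we assume each client i holds a local risk f_i, a purely local minimizer x_i^* (computable on-device, hence the communication-free initialization), and a personalization weight \alpha_i \in [0,1].

```latex
% Sketch of a personalized finite-sum objective of the kind the abstract describes.
% Assumed notation: f_i is client i's local risk, x_i^* = argmin_x f_i(x) is its
% purely local model (obtainable without any communication), and alpha_i in [0,1]
% controls personalization: alpha_i = 0 recovers the pure local model, while
% alpha_i = 1 recovers standard average risk minimization.
\min_{x \in \mathbb{R}^d} \; f(x) \coloneqq \frac{1}{n} \sum_{i=1}^{n}
    f_i\bigl( \alpha_i x + (1 - \alpha_i)\, x_i^* \bigr)
```

With the x_i^* precomputed locally and the \alpha_i fixed, this is an ordinary finite sum over clients, which is exactly why standard (potentially non-local) distributed optimization methods apply directly, as the abstract claims.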