In this work, we quantitatively calibrate the performance of global and local models in federated learning through a multi-criterion optimization-based framework, which we cast as a constrained program. Each device seeks to minimize its local objective while satisfying nonlinear constraints that quantify the proximity between its local model and the global model. By considering the Lagrangian relaxation of this problem, we develop a novel primal-dual method called Federated Learning Beyond Consensus (\texttt{FedBC}). Theoretically, we establish that \texttt{FedBC} converges to a first-order stationary point at rates that match the state of the art, up to an additional error term that depends on a tolerance parameter introduced to scalarize the multi-criterion formulation. Finally, we demonstrate that \texttt{FedBC} balances the global and local model test accuracy metrics across a suite of datasets (Synthetic, MNIST, CIFAR-10, Shakespeare), achieving performance competitive with the state of the art.
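To make the formulation concrete, a minimal sketch of the constrained program follows; the notation ($f_i$ for device $i$'s local objective, $w_i$ for its local model, $w$ for the global model, $\gamma \ge 0$ for the tolerance, and $\lambda_i \ge 0$ for the dual variables) is illustrative rather than the paper's own:
% Illustrative constrained program (notation assumed, not taken from the paper):
% each of the N devices minimizes its local objective subject to a nonlinear
% proximity constraint between its local model and the global model.
\begin{align}
  \min_{\{w_i\},\, w} \ \sum_{i=1}^{N} f_i(w_i)
  \quad \text{s.t.} \quad \|w_i - w\|^2 \le \gamma \quad \forall i .
\end{align}
% Lagrangian relaxation with multipliers lambda_i >= 0, over which a
% primal-dual method alternates primal descent and dual ascent steps:
\begin{align}
  \mathcal{L}\big(\{w_i\}, w, \{\lambda_i\}\big)
  = \sum_{i=1}^{N} \Big[\, f_i(w_i) + \lambda_i \big( \|w_i - w\|^2 - \gamma \big) \Big].
\end{align}
Under this sketch, setting $\gamma = 0$ forces $w_i = w$ and recovers exact consensus, while $\gamma > 0$ permits the deliberate gap between local and global models that the tolerance-dependent error term in the convergence guarantee reflects.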