Federated Learning (FL) is a promising distributed learning paradigm that allows a number of data owners (also called clients) to collaboratively learn a shared model without disclosing any client's local data. However, FL may fail to proceed properly when it enters a state that we call negative federated learning (NFL). This paper addresses the problem of NFL: we formulate a rigorous definition of NFL and analyze its essential cause. We then propose a novel framework, LINDT, for tackling NFL at run time; the framework can potentially work with any neural-network-based FL system for NFL detection and recovery. Specifically, we introduce a metric for detecting NFL from the server side. Upon NFL recovery, the framework adapts the federated model to each client's local data by learning a Layer-wise Intertwined Dual-model. Experimental results show that the proposed approach significantly improves the performance of FL on local data in various NFL scenarios.