Federated learning allows mobile clients to jointly train a global model without sending their private data to a central server. Although extensive work has studied the performance guarantees of the global model, it remains unclear how each individual client influences the collaborative training process. In this work, we define a novel notion, called {\em Fed-Influence}, to quantify this influence in terms of model parameters, and propose an effective and efficient estimation algorithm. In particular, our design satisfies several desirable properties: (1) it requires neither retraining nor retracing, adding only linear computational overhead to clients and the server; (2) it strictly maintains the tenet of federated learning, without revealing any client's local data; and (3) it works well on both convex and non-convex loss functions and does not require the final model to be optimal. Empirical results on a synthetic dataset and the FEMNIST dataset show that our estimation method approximates Fed-Influence with small bias. Further, we demonstrate an application to client-level model debugging.
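To make the notion concrete, the following is a minimal sketch of one natural parameter-space formalization. The leave-one-client-out form and the symbols $\theta_T$ and $\theta_T^{-k}$ are illustrative assumptions introduced here, not necessarily the paper's exact definition:

% Hedged sketch (assumption, not the paper's exact definition): a
% leave-one-client-out, parameter-space notion of client influence.
% \theta_T      : final global model parameters after training with all clients
% \theta_T^{-k} : counterfactual final parameters had client k never participated
\[
  \mathrm{Fed\text{-}Influence}(k) \;=\; \theta_T^{-k} - \theta_T ,
\]
% An estimator in the spirit of the abstract would approximate this difference
% without retraining, i.e., without ever computing \theta_T^{-k} directly.

Under such a formalization, a large norm of $\mathrm{Fed\text{-}Influence}(k)$ would indicate that client $k$ substantially shifts the final global model, which is what a client-level debugging application would look for.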