In many real-world situations, data is distributed across multiple locations and can't be combined for training. Federated learning is a novel distributed learning approach that allows multiple federating agents to jointly learn a model. While this approach might reduce the error each agent experiences, it also raises questions of fairness: to what extent can the error experienced by one agent be significantly lower than the error experienced by another agent? In this work, we consider two notions of fairness that each may be appropriate in different circumstances: egalitarian fairness (which aims to bound how dissimilar error rates can be) and proportional fairness (which aims to reward players for contributing more data). For egalitarian fairness, we obtain a tight multiplicative bound on how widely error rates can diverge between agents federating together. For proportional fairness, we show that sub-proportional error (relative to the number of data points contributed) is guaranteed for any individually rational federating coalition.
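To make the two notions concrete, here is one plausible formalization; the notation below (err_i for agent i's expected error after federating, n_i for the number of data points agent i contributes) is our assumption and does not appear in the abstract itself.

% Hypothetical notation (not from the abstract): err_i is agent i's expected
% error after federating, n_i is the number of data points agent i contributes.

% Egalitarian fairness with parameter \lambda >= 1: the error rates of any
% two agents federating together differ by at most a multiplicative factor.
\[
  \frac{\mathrm{err}_i}{\mathrm{err}_j} \le \lambda
  \qquad \text{for all agents } i, j .
\]

% Sub-proportional error: an agent contributing more data is rewarded with
% lower error, but its advantage grows no faster than the ratio of data
% contributions.
\[
  n_i \ge n_j
  \;\Longrightarrow\;
  \frac{\mathrm{err}_j}{\mathrm{err}_i} \le \frac{n_i}{n_j} .
\]

Under this reading, the egalitarian result stated above amounts to a tight value of \lambda for agents federating together, and the proportional result says the second implication holds for every individually rational federating coalition.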