Federated learning enables machine learning algorithms to be trained over a network of multiple decentralized edge devices without requiring the exchange of local datasets. Successfully deploying federated learning requires ensuring that agents (e.g., mobile devices) faithfully execute the intended algorithm, a requirement that has been largely overlooked in the literature. In this study, we first use risk bounds to analyze how the key feature of federated learning, unbalanced and non-i.i.d. data, affects agents' incentives to voluntarily participate in and obediently follow traditional federated learning algorithms. Specifically, our analysis reveals that agents with less typical data distributions and relatively more samples are more likely to opt out of, or tamper with, federated learning algorithms. Motivated by this, we formulate the first faithful implementation problem of federated learning and design two faithful federated learning mechanisms that satisfy economic properties, scalability, and privacy. Further, the time complexity of computing all agents' payments is $\mathcal{O}(1)$ in the number of agents. First, we design a Faithful Federated Learning (FFL) mechanism that approximates the Vickrey-Clarke-Groves (VCG) payments via an incremental computation. We show that it achieves (probably approximate) optimality, faithful implementation, voluntary participation, and other economic properties (such as budget balance). Second, by partitioning agents into several subsets, we present a scalable VCG mechanism approximation. We further design a scalable and Differentially Private FFL (DP-FFL) mechanism, the first differentially private faithful mechanism, which maintains the economic properties. Our mechanism enables three-way performance tradeoffs among privacy, the number of iterations required, and payment accuracy.
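To make the payment structure concrete, the sketch below computes classic Clarke-pivot (VCG) payments, where each agent is paid its marginal contribution to social welfare. The `welfare` function here is purely hypothetical (a concave function of the agents' sample counts, standing in for the value their data adds to the federated model); it is not the paper's actual objective. Note that the naive computation below re-evaluates welfare once per agent, which in a real federated setting would mean retraining without that agent; the FFL mechanism described above instead approximates these payments incrementally.

```python
import math

def welfare(samples):
    # Hypothetical welfare: concave in the total data, so contributions
    # exhibit diminishing returns (a stand-in for model quality).
    return math.sqrt(sum(samples))

def vcg_payments(samples):
    """Clarke-pivot payment to agent i: welfare with all agents present
    minus welfare with agent i removed, i.e., i's marginal contribution.

    Naive cost: one welfare evaluation per agent. FFL avoids this
    per-agent recomputation via an incremental approximation.
    """
    w_all = welfare(samples)
    return [w_all - welfare(samples[:i] + samples[i + 1:])
            for i in range(len(samples))]

# Three agents holding 100, 400, and 900 local samples (toy numbers).
pays = vcg_payments([100, 400, 900])
```

Under this concave welfare, an agent holding more samples receives a larger payment, consistent with the incentive analysis above: agents with relatively more data have more to gain (or lose) from how the mechanism compensates them.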