Federated learning (FL) is a paradigm that allows distributed clients to learn a shared machine learning model without sharing their sensitive training data. While largely decentralized, FL still requires resources to fund a central orchestrator or to reimburse data contributors in order to incentivize participation. Inspired by insights from prior-free auction design, we propose FIPFA (Federated Incentive Payments via Prior-Free Auctions), a mechanism for collecting monetary contributions from self-interested clients. The mechanism operates in the semi-honest trust model and works even when clients have heterogeneous levels of interest in receiving high-quality models and the server does not know those levels. We run experiments on the MNIST dataset to evaluate the quality of the models clients receive under FIPFA, as well as the mechanism's incentive properties.
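For readers unfamiliar with prior-free auctions, the sketch below implements the classic random sampling optimal price (RSOP) auction of Goldberg et al., the canonical prior-free mechanism that this line of work draws on. It is an illustration of the prior-free idea only, not the FIPFA mechanism itself; all function names and the example bids are our own.

```python
import random

def optimal_single_price(bids):
    """Revenue-maximizing single sale price over a set of bids
    (the price can be restricted to one of the bids)."""
    best_price, best_revenue = 0.0, 0.0
    for p in bids:
        revenue = p * sum(1 for b in bids if b >= p)
        if revenue > best_revenue:
            best_price, best_revenue = p, revenue
    return best_price

def rsop_auction(bids, rng=random.Random(0)):
    """Random Sampling Optimal Price auction: returns a dict
    mapping winning bidder index -> payment."""
    # Randomly split bidders into two groups.
    group_a, group_b = [], []
    for i in range(len(bids)):
        (group_a if rng.random() < 0.5 else group_b).append(i)
    # Each group is offered the optimal price computed on the *other*
    # group, so no bidder can influence the price they face; this is
    # what makes truthful bidding a dominant strategy, with no prior
    # over valuations assumed.
    price_for_a = optimal_single_price([bids[i] for i in group_b])
    price_for_b = optimal_single_price([bids[i] for i in group_a])
    payments = {}
    for i in group_a:
        if price_for_a > 0 and bids[i] >= price_for_a:
            payments[i] = price_for_a
    for i in group_b:
        if price_for_b > 0 and bids[i] >= price_for_b:
            payments[i] = price_for_b
    return payments

# Hypothetical example: clients with heterogeneous valuations
# for receiving a high-quality model.
print(rsop_auction([5.0, 1.0, 8.0, 3.0, 7.0, 2.0]))
```

The key design point this sketch illustrates is that the server needs no prior knowledge of clients' valuations: prices are derived entirely from the submitted bids of other participants, which is what makes the prior-free setting a natural fit when the server does not know clients' levels of interest.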