Federated learning (FL) is a promising distributed framework for collaborative artificial intelligence model training while protecting user privacy. A bootstrapping component that has attracted significant research attention is the design of incentive mechanisms to stimulate user collaboration in FL. The majority of existing works adopt a broker-centric approach that helps the central operator attract participants and obtain a well-trained model. Few works consider forging participant-centric collaboration, in which participants jointly pursue an FL model for their common interests; this induces dramatic differences in incentive mechanism design compared with broker-centric FL. To coordinate selfish and heterogeneous participants, we propose a novel analytic framework for incentivizing effective and efficient collaboration in participant-centric FL. Specifically, we propose two novel game models for contribution-oblivious FL (COFL) and contribution-aware FL (CAFL), where the latter implements a minimum contribution threshold mechanism. We further analyze the existence and uniqueness of the Nash equilibrium for both the COFL and CAFL games, and design efficient algorithms to reach the equilibrium solutions. Extensive performance evaluations show that a free-riding phenomenon exists in COFL, which can be greatly alleviated by adopting the CAFL model with an optimized minimum threshold.
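The abstract does not spell out the underlying game formulation, so the sketch below is purely illustrative and is not the paper's COFL/CAFL model. It assumes a hypothetical toy contribution game: each participant chooses a contribution level, enjoys a shared logarithmic benefit from the jointly trained model, and pays a private linear cost; setting the threshold `tau` to 0 mimics contribution-oblivious participation, while a positive `tau` mimics a minimum contribution threshold rule. All function names, the utility form, and the parameter values (`a`, `costs`, `tau`) are assumptions introduced here for illustration.

```python
# Illustrative sketch only -- NOT the paper's actual COFL/CAFL formulation.
# Hypothetical setup: participant i picks x_i >= 0, gets shared benefit
# a*log(1 + sum_j x_j) from the trained model, and pays cost c_i * x_i.
# Under the threshold rule, only participants with x_i >= tau receive the model.
import math


def utility(x_i, others_sum, c_i, a=10.0):
    """Toy utility: shared log benefit minus private linear cost."""
    return a * math.log(1.0 + x_i + others_sum) - c_i * x_i


def best_response(others_sum, c_i, a=10.0, tau=0.0):
    """Best reply to the other participants' total contribution.

    tau = 0 mimics the contribution-oblivious case; tau > 0 mimics a
    minimum contribution threshold.
    """
    # Unconstrained maximizer of a*log(1 + x + S) - c*x.
    x_star = max(0.0, a / c_i - 1.0 - others_sum)
    if tau <= 0.0:
        return x_star
    # With a threshold, either contribute at least tau or opt out (utility 0).
    x_meet = max(tau, x_star)
    return x_meet if utility(x_meet, others_sum, c_i, a) > 0.0 else 0.0


def iterate_to_equilibrium(costs, a=10.0, tau=0.0, tol=1e-8, max_rounds=1000):
    """Round-robin best-response dynamics until contributions stop changing."""
    x = [0.0] * len(costs)
    for _ in range(max_rounds):
        delta = 0.0
        for i, c_i in enumerate(costs):
            others = sum(x) - x[i]
            new_xi = best_response(others, c_i, a, tau)
            delta = max(delta, abs(new_xi - x[i]))
            x[i] = new_xi
        if delta < tol:
            break
    return x


if __name__ == "__main__":
    costs = [1.0, 1.5, 2.0, 4.0]  # heterogeneous unit costs (hypothetical)
    print("no threshold:", iterate_to_equilibrium(costs, tau=0.0))
    print("with threshold:", iterate_to_equilibrium(costs, tau=2.0))
```

Under these toy parameters, the no-threshold run converges to a profile where only the lowest-cost participant contributes (the free-riding pattern the abstract describes), whereas the thresholded run yields more balanced contributions; the actual models, equilibrium analysis, and algorithms are developed in the paper itself.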