The success of federated learning (FL) ultimately depends on how strategic participants behave under partial observability, yet most formulations still treat FL as a static optimization problem. We instead view FL deployments as governed strategic systems and develop an analytical framework that separates welfare-improving behavior from metric gaming. Within this framework, we introduce indices that quantify manipulability, the price of gaming, and the price of cooperation, and we use them to study how rules, information disclosure, evaluation metrics, and aggregator-switching policies reshape incentives and cooperation patterns. We derive threshold conditions for deterring harmful gaming while preserving benign cooperation, and for triggering auto-switch rules when early-warning indicators become critical. Building on these results, we construct a design toolkit that includes a governance checklist and a simple audit-budget allocation algorithm with a provable performance guarantee. In simulations across diverse stylized environments and in a federated learning case study, the observed behavior consistently matches the qualitative and quantitative patterns predicted by our framework. Taken together, our results provide design principles and operational guidelines for reducing metric gaming while sustaining stable, high-welfare cooperation in FL platforms.
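The abstract names an audit-budget allocation algorithm but does not describe it. As a purely illustrative sketch, not the paper's algorithm, the Python snippet below shows one generic way a platform could split a fixed audit budget across participants in proportion to per-participant risk or manipulability estimates, with an optional per-participant cap. All identifiers here (allocate_audit_budget, risk_scores, total_budget, cap) are assumptions introduced for illustration only.

```python
def allocate_audit_budget(risk_scores, total_budget, cap=None):
    """Illustrative placeholder, not the paper's algorithm.

    Split a fixed audit budget across participants in proportion to their
    estimated risk scores, optionally capping any single participant's share.

    risk_scores  : dict mapping participant id -> nonnegative risk estimate
    total_budget : total audit effort available (e.g., number of audited updates)
    cap          : optional upper bound on any single participant's share
    """
    if not risk_scores:
        return {}
    total_risk = sum(risk_scores.values())
    if total_risk <= 0:
        # No risk signal: spread the budget uniformly.
        share = total_budget / len(risk_scores)
        return {pid: min(share, cap) if cap is not None else share
                for pid in risk_scores}

    allocation = {}
    remaining_budget = float(total_budget)
    remaining_risk = float(total_risk)
    # Visit highest-risk participants first so budget freed by the cap
    # can be redistributed to the remaining participants.
    for pid, risk in sorted(risk_scores.items(), key=lambda kv: kv[1], reverse=True):
        if remaining_risk <= 0 or remaining_budget <= 0:
            allocation[pid] = 0.0
            continue
        share = remaining_budget * risk / remaining_risk
        if cap is not None:
            share = min(share, cap)
        allocation[pid] = share
        remaining_budget -= share
        remaining_risk -= risk
    return allocation


if __name__ == "__main__":
    scores = {"client_a": 0.8, "client_b": 0.3, "client_c": 0.1}
    # Highest-risk client is capped at 60; the freed budget flows to the others.
    print(allocate_audit_budget(scores, total_budget=100, cap=60))
```

Processing participants in descending risk order lets any budget released by the cap flow to lower-risk participants; the performance guarantee claimed in the abstract applies to the authors' algorithm, not to this placeholder.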