We prove in this paper that, perhaps surprisingly, incentivizing data misreporting is not inevitable. Through a careful design of the loss function, we propose Licchavi, a global and personalized learning framework with provable strategyproofness guarantees. Essentially, we prove that no user can gain much by replying to Licchavi's queries with answers that deviate from their true preferences. Interestingly, Licchavi also promotes the desirable "one person, one unit-force vote" fairness principle. Furthermore, our empirical evaluation showcases Licchavi's real-world applicability. We believe that our results are critical for the safety of any learning scheme that leverages user-generated data.
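To make the "one person, one unit-force vote" principle concrete, the following is a minimal sketch of one way such a loss design can bound each user's influence; the abstract does not state Licchavi's actual objective, so the symbols below (global model \(\rho\), per-user models \(\theta_n\), local losses \(L_n\) on data \(\mathcal{D}_n\), weight \(\lambda\)) are illustrative assumptions, not the paper's definitions.

% Hedged sketch: a global/personalized objective whose non-squared
% norm penalty caps each user's pull on the global model \rho.
\[
  \mathcal{L}(\rho, \theta_1, \dots, \theta_N)
  \;=\; \sum_{n=1}^{N} \Big( L_n(\theta_n \mid \mathcal{D}_n)
        \;+\; \lambda \,\lVert \theta_n - \rho \rVert_2 \Big).
\]
% Because the penalty is a norm rather than a squared norm, its
% gradient with respect to \rho has norm exactly \lambda whenever
% \theta_n \neq \rho:
\[
  \Big\lVert \nabla_{\rho}\, \lambda \lVert \theta_n - \rho \rVert_2 \Big\rVert_2
  \;=\; \lambda
  \qquad (\theta_n \neq \rho),
\]
% i.e. every user exerts at most a fixed "unit force" on the global
% model, no matter how extreme their reported data is.

Under this illustrative objective, misreporting can change the direction in which a user pulls the global model but never the magnitude of that pull, which is one reading of the fairness principle named above.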