In 2023, the International Conference on Machine Learning (ICML) required authors with multiple submissions to rank their submissions based on perceived quality. In this paper, we aim to employ these author-specified rankings to enhance peer review in machine learning and artificial intelligence conferences by extending the Isotonic Mechanism (Su, 2021, 2022) to exponential family distributions. This mechanism generates adjusted scores that closely align with the original scores while adhering to the author-specified rankings. Although the mechanism applies to a broad spectrum of exponential family distributions, its implementation does not require knowledge of the specific distribution. We demonstrate that an author is incentivized to provide accurate rankings when her utility is a convex additive function of the adjusted review scores. For a certain subclass of exponential family distributions, we prove that the author reports truthfully only if the question involves only pairwise comparisons between her submissions, thereby establishing the optimality of ranking for truthful information elicitation. Lastly, we show that the adjusted scores dramatically improve the accuracy of the original scores and, when the true scores have bounded total variation, estimate them with statistical consistency, achieving nearly minimax optimality.
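At its core, the Isotonic Mechanism projects the raw review scores onto the set of score vectors consistent with the author's reported ranking, which can be computed with the pool-adjacent-violators algorithm (PAVA). The sketch below illustrates this projection step only; the function name `isotonic_mechanism` and the plain-Python PAVA are illustrative assumptions, not the paper's implementation.

```python
def _pava_nondecreasing(y):
    """Pool-adjacent-violators: least-squares nondecreasing fit to y."""
    # Each block stores (sum of values, count); pooled blocks share their mean.
    blocks = []
    for v in y:
        blocks.append([v, 1])
        # Merge while a preceding block's mean exceeds the current block's mean.
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)
    return fit


def isotonic_mechanism(scores, ranking):
    """Adjust raw scores to respect the author's ranking (best first).

    scores  -- raw review scores, indexed by submission
    ranking -- submission indices from claimed-best to claimed-worst
    Returns adjusted scores in the original submission order.
    """
    # Reorder scores from claimed-best to claimed-worst.
    ordered = [scores[i] for i in ranking]
    # A nonincreasing fit is a nondecreasing fit of the negated values.
    fit = [-v for v in _pava_nondecreasing([-v for v in ordered])]
    # Map the adjusted scores back to the original submission positions.
    adjusted = [0.0] * len(scores)
    for pos, idx in enumerate(ranking):
        adjusted[idx] = fit[pos]
    return adjusted
```

For example, if submission 1 received a raw score of 4.0 and submission 0 a raw score of 6.0, but the author ranks submission 1 above submission 0, the two scores are pooled to their mean, 5.0 each, which is the closest score vector (in squared error) consistent with the reported ranking.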