Mainstream machine learning conferences have seen a dramatic increase in the number of participants, along with a growing range of perspectives, in recent years. Members of the machine learning community are likely to overhear allegations ranging from the randomness of acceptance decisions to institutional bias. In this work, we critically analyze the review process through a comprehensive study of papers submitted to ICLR between 2017 and 2020. We quantify the reproducibility and randomness of review scores and acceptance decisions, and examine whether scores correlate with paper impact. Our findings suggest strong institutional bias in accept/reject decisions, even after controlling for paper quality. Furthermore, we find evidence of a gender gap: female authors receive lower scores, lower acceptance rates, and fewer citations per paper than their male counterparts. We conclude with recommendations for future conference organizers.