The number of papers submitted to academic conferences is steadily rising in many scientific disciplines. To handle this growth, systems for automatic paper-reviewer assignments are increasingly used during the reviewing process. These systems use statistical topic models to characterize the content of submissions and automate the assignment to reviewers. In this paper, we show that this automation can be manipulated using adversarial learning. We propose an attack that adapts a given paper so that it misleads the assignment and selects its own reviewers. Our attack is based on a novel optimization strategy that alternates between the feature space and problem space to realize unobtrusive changes to the paper. To evaluate the feasibility of our attack, we simulate the paper-reviewer assignment of an actual security conference (IEEE S&P) with 165 reviewers on the program committee. Our results show that we can successfully select and remove reviewers without access to the assignment system. Moreover, we demonstrate that the manipulated papers remain plausible and are often indistinguishable from benign submissions.
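To make the alternating feature-space/problem-space optimization concrete, the following is a minimal toy sketch in Python. Everything in it is an illustrative assumption rather than the paper's actual method: the weighted reviewer vocabularies stand in for a real statistical topic model, and the helper names (affinities, realize, select_reviewer) are hypothetical.

```python
from collections import Counter

# Toy stand-in for a statistical topic model: every reviewer has a
# weighted topic vocabulary, and affinity is the weighted word count.
# All vocabularies, weights, and names here are illustrative assumptions.
REVIEWER_TOPICS = {
    "reviewer_a": {"fuzzing": 3, "vulnerability": 2, "exploit": 1},
    "reviewer_b": {"adversarial": 3, "learning": 2, "model": 1},
    "reviewer_c": {"network": 3, "protocol": 2, "traffic": 1},
}

def affinities(words):
    """Feature space: score every reviewer against the paper's words."""
    return {
        r: sum(words[w] * weight for w, weight in vocab.items())
        for r, vocab in REVIEWER_TOPICS.items()
    }

def realize(text, word):
    """Problem space: an unobtrusive edit that adds one topic word.
    A real attack would use rephrasing, synonyms, or added citations."""
    return f"{text} {word}"

def select_reviewer(text, target, budget=10):
    """Alternate between feature and problem space until the target
    reviewer tops the ranking or the edit budget runs out."""
    for _ in range(budget):
        words = Counter(text.lower().split())
        scores = affinities(words)
        if max(scores, key=scores.get) == target:
            return text  # the assignment would now pick the target
        # Feature-space step: highest-weight topic word of the target,
        # capped at two uses to keep the manipulation unobtrusive.
        candidates = sorted(REVIEWER_TOPICS[target],
                            key=REVIEWER_TOPICS[target].get, reverse=True)
        best = next((w for w in candidates if words[w] < 2), candidates[-1])
        # Problem-space step: realize the change as a concrete text edit.
        text = realize(text, best)
    return text

if __name__ == "__main__":
    paper = "we analyze network protocol traffic at scale"
    print(select_reviewer(paper, target="reviewer_b"))
```

The alternation is the point of the sketch: not every change that helps in the feature space has a plausible realization as a text edit, so each feature-space step must be followed by a problem-space step that keeps the manipulated paper unobtrusive.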