Peer review cannot work unless qualified and interested reviewers are assigned to each paper. Nearly all automated reviewer assignment approaches estimate real-valued affinity scores for each paper-reviewer pair that act as proxies for the predicted quality of a future review; conferences then assign reviewers to maximize the sum of these scores. This procedure does not account for noise in affinity score computation -- reviewers can only bid on a small number of papers, and textual similarity models are inherently probabilistic estimators. In this work, we assume paper-reviewer affinity scores are estimated by a probabilistic model. Using these probabilistic estimates, we bound the scores with high probability and maximize the worst-case sum of scores over reviewer assignments. Although we do not recommend any particular method for estimating probabilistic affinity scores, we demonstrate how to robustly maximize the sum of scores under several different models. Our general approach can integrate a wide variety of probabilistic paper-reviewer affinity models into reviewer assignment, opening the door to a much more robust peer review process.
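To make the idea concrete, below is a minimal sketch of one possible instantiation, not the paper's actual formulation. It assumes a hypothetical Gaussian probabilistic model that yields a mean and standard deviation for every paper-reviewer pair, and independent per-pair confidence intervals, so that the worst-case sum of scores decomposes into a sum of high-probability lower bounds; the robust assignment then reduces to a standard bipartite matching LP over those lower bounds. All sizes, load limits, and the confidence level are illustrative.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.stats import norm

# Hypothetical probabilistic affinity estimates: a mean and std for each
# paper-reviewer pair (assumption; any probabilistic model could supply these).
rng = np.random.default_rng(0)
n_papers, n_reviewers = 4, 6
mu = rng.uniform(0.0, 1.0, size=(n_papers, n_reviewers))
sigma = rng.uniform(0.05, 0.2, size=(n_papers, n_reviewers))

# High-probability lower bound on each score (Gaussian assumption):
# with probability at least 1 - delta, the true score exceeds lb.
delta = 0.05
lb = mu - norm.ppf(1 - delta) * sigma

reviewers_per_paper = 2   # each paper needs this many reviewers
max_load = 2              # each reviewer handles at most this many papers

# Decision variable x[p, r] in [0, 1], flattened row-major.
n_vars = n_papers * n_reviewers
c = -lb.ravel()  # linprog minimizes, so negate to maximize the worst-case sum

# Equality constraints: each paper receives exactly `reviewers_per_paper` reviewers.
A_eq = np.zeros((n_papers, n_vars))
for p in range(n_papers):
    A_eq[p, p * n_reviewers:(p + 1) * n_reviewers] = 1.0
b_eq = np.full(n_papers, reviewers_per_paper)

# Inequality constraints: each reviewer's load is at most `max_load`.
A_ub = np.zeros((n_reviewers, n_vars))
for r in range(n_reviewers):
    A_ub[r, r::n_reviewers] = 1.0
b_ub = np.full(n_reviewers, max_load)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * n_vars, method="highs")

# The constraint matrix is totally unimodular, so the LP optimum is integral.
assignment = res.x.reshape(n_papers, n_reviewers).round().astype(int)
print("worst-case total score:", -res.fun)
print(assignment)
```

Note that when the uncertainty set couples scores across pairs (rather than treating each interval independently, as assumed here), the worst case no longer decomposes and a genuine min-max formulation is needed; this sketch only illustrates the simplest decomposable case.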