Motivated by crowdsourced computation, peer-grading, and recommendation systems, Braverman et al. [STOC'16] recently studied the query and round complexity of classic and popular problems such as finding the maximum (max), finding all elements above a certain value (threshold-v), or computing the top-k elements of an array (top-k) in a noisy environment. An illustrative example is the task of selecting papers for a conference. This task is challenging due to the crowdsourced nature of peer reviews: (1) the results of reviews are noisy and (2) it is necessary to parallelize the review process as much as possible. Thus, we study these fundamental problems in the noisy value model and the noisy comparison model: in the noisy value model, every review returns a value (e.g., accept); in the noisy comparison model (introduced in the seminal work of Feige et al. [SICOMP'94]), a reviewer is asked a comparative yes/no question: "Is paper i better than paper j?"

In a first step, we show optimal worst-case upper and lower bounds on the round vs. query complexity for max and top-k in all models. For threshold-v, we obtain optimal query complexity and nearly-optimal round complexity (i.e., optimal up to a factor O(log log k), where k is the size of the output) for all models. We then go beyond the worst case and provide, for a large range of parameters, instance-optimal algorithms (w.r.t. the query complexity), i.e., we answer the question of how important knowledge of the instance is. We complement these results by showing that for some families of instances, no instance-optimal algorithm can exist. Furthermore, we show that the value and comparison models are asymptotically equivalent for most practical settings (for all the above-mentioned problems); on the other hand, the value model is strictly easier than the comparison model in the case where the papers are totally ordered.
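
To make the noisy comparison model concrete, the following is a minimal sketch (not the paper's algorithm): it assumes each comparison query is independently correct with probability 2/3, a common parameterization in this line of work, and shows how repeating a query and taking a majority vote boosts its reliability. The names `noisy_compare`, `boosted_compare`, and the repetition count are illustrative choices, not definitions from the paper.

```python
import random

# Assumed per-query success probability (a common parameterization; the paper
# may use a different constant).
P_CORRECT = 2 / 3

def noisy_compare(values, i, j, rng=random):
    """Noisy oracle for the question 'is item i better than item j?'.
    Returns the true answer with probability P_CORRECT, the flipped
    answer otherwise."""
    truth = values[i] > values[j]
    return truth if rng.random() < P_CORRECT else not truth

def boosted_compare(values, i, j, repetitions=15, rng=random):
    """Repeat the noisy query and take a majority vote; the error
    probability drops exponentially in the number of repetitions."""
    votes = sum(noisy_compare(values, i, j, rng) for _ in range(repetitions))
    return votes > repetitions / 2

if __name__ == "__main__":
    scores = [0.3, 0.9, 0.5]              # hypothetical paper qualities
    print(boosted_compare(scores, 1, 0))  # True with high probability
```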