We study the model of metric voting proposed by Feldman et al. [2020]. In this model, experts and candidates are located in a metric space, and each candidate possesses a quality that is independent of her location. An expert evaluates each candidate as the candidate's quality minus a bias term, namely the distance between the candidate and the expert in the metric space, and then votes for her favorite candidate. The goal is to select a voting rule and a committee of experts so as to mitigate the bias. More specifically, given $m$ candidates, what is the minimum number of experts needed to ensure that the voting rule selects a candidate whose quality is at most $\varepsilon$ worse than that of the best candidate? Our first main result is a new way to select the committee that uses exponentially fewer experts than the method proposed by Feldman et al. [2020]. Our second main result is a novel construction that substantially improves the lower bound on the committee size. Indeed, our upper and lower bounds match in terms of $m$, the number of candidates, and $\varepsilon$, the desired accuracy, for general convex normed spaces, and differ only by a multiplicative factor that depends on the dimension of the underlying normed space but is independent of the other parameters of the problem. We extend these nearly matching upper and lower bounds to the setting in which each expert returns a ranking of her top $k$ candidates and we wish to choose $\ell$ candidates whose cumulative quality is at most $\varepsilon$ worse than that of the best set of $\ell$ candidates, settling an open problem of Feldman et al. [2020]. Finally, we consider the setting with multiple rounds of voting. We show that introducing a second round of voting makes the number of experts needed to guarantee the selection of an $\varepsilon$-optimal candidate independent of the number of candidates.
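To make the bias model concrete, the following display is a minimal sketch of how a single expert's vote is determined; the notation is ours and not necessarily that of Feldman et al. [2020], where $C$ denotes the set of candidates, $q(c)$ the quality of candidate $c$, and $d(e, c)$ the metric distance between expert $e$ and candidate $c$:
\[
\mathrm{vote}(e) \in \operatorname*{arg\,max}_{c \in C} \bigl( q(c) - d(e, c) \bigr).
\]
Under this model, the guarantee sought above reads: the voting rule's winner $c^{\star}$ should satisfy $q(c^{\star}) \ge \max_{c \in C} q(c) - \varepsilon$.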