Fair consensus building combines the preferences of multiple rankers into a single consensus ranking, while ensuring that no group defined by a protected attribute (such as race or gender) is disadvantaged compared to other groups. Manually generating a fair consensus ranking is time-consuming and impractical, even for a fairly small number of candidates. While algorithmic approaches for auditing and generating fair consensus rankings have been developed, they have not been operationalized in interactive systems. To bridge this gap, we introduce FairFuse, a visualization system for generating, analyzing, and auditing fair consensus rankings. We construct a data model that includes base rankings entered by rankers, augmented with measures of group fairness, and algorithms for generating consensus rankings with varying degrees of fairness. We design novel visualizations that encode these measures in a parallel-coordinates-style rank visualization, with interactions for generating and exploring fair consensus rankings. We describe use cases in which FairFuse supports a decision-maker in ranking scenarios where fairness is important, and discuss emerging challenges for future efforts supporting fairness-oriented rank analysis. Code and demo videos are available at https://osf.io/hd639/.
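To make the notion of a group-fairness measure over a ranking concrete, the sketch below scores each protected group by the fraction of cross-group candidate pairs its members win, and reports the gap between the most- and least-advantaged groups. This is a minimal illustrative sketch under assumed definitions; the function names, the pairwise-parity formulation, and the gap statistic are assumptions for exposition, not FairFuse's actual metrics or API.

```python
from itertools import combinations


def pairwise_parity(ranking, groups):
    """Score each group by its share of cross-group pairwise wins.

    ranking: list of candidate ids, best first.
    groups:  dict mapping candidate id -> protected-attribute value.
    Returns a dict mapping each group to the fraction of mixed-group
    pairs in which one of its members is ranked above the other member.
    """
    labels = set(groups.values())
    wins = {g: 0 for g in labels}
    totals = {g: 0 for g in labels}
    for i, j in combinations(range(len(ranking)), 2):
        a, b = ranking[i], ranking[j]  # a is ranked above b
        ga, gb = groups[a], groups[b]
        if ga != gb:  # only mixed-group pairs count toward parity
            wins[ga] += 1
            totals[ga] += 1
            totals[gb] += 1
    return {g: wins[g] / totals[g] if totals[g] else 0.0 for g in labels}


def parity_gap(ranking, groups):
    """Unfairness as the spread between the best- and worst-off groups."""
    scores = pairwise_parity(ranking, groups)
    return max(scores.values()) - min(scores.values())


# Example: group "F" wins 3 of its 4 cross-group pairs, "M" wins 1 of 4,
# so the parity scores are {F: 0.75, M: 0.25} and the gap is 0.5.
consensus = ["a", "b", "c", "d"]
attrs = {"a": "F", "b": "M", "c": "F", "d": "M"}
print(pairwise_parity(consensus, attrs))
print(parity_gap(consensus, attrs))
```

A measure of this shape (one score per group, plus a scalar gap) is the kind of quantity that can be encoded alongside each consensus ranking in a parallel-coordinates-style view, letting a decision-maker compare candidate consensus rankings by both agreement and fairness.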