Fair consensus building combines the preferences of multiple rankers into a single consensus ranking, while ensuring that no group of ranked candidates defined by a protected attribute (such as race or gender) is disadvantaged relative to other groups. Manually generating a fair consensus ranking is time-consuming and impractical, even for a fairly small number of candidates. While algorithmic approaches for auditing and generating fair consensus rankings have been developed recently, they have not been operationalized in interactive systems. To bridge this gap, we introduce FairFuse, a visualization-enabled tool for generating, analyzing, and auditing fair consensus rankings. In developing FairFuse, we construct a data model that includes base rankings entered by rankers, augmented with measures of group fairness, algorithms for generating consensus rankings with varying degrees of fairness, and other fairness- and rank-related capabilities. We design novel visualizations that encode these measures in a parallel-coordinates-style rank visualization, with interactive capabilities for generating and exploring fair consensus rankings. We provide case studies in which FairFuse supports a decision-maker in ranking scenarios where fairness is important. Finally, we discuss emerging challenges for future efforts supporting fairness-oriented rank analysis, including handling intersectionality, defined by multiple protected attributes, and the need for user studies targeting people's perceptions and use of fairness-oriented visualization systems. Code and demo videos are available at https://osf.io/hd639/.
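To make the two core ingredients concrete, the following is a minimal sketch (not FairFuse's actual implementation, whose algorithms and fairness measures are described in the paper) of how base rankings might be aggregated into a consensus and then audited for group fairness. It uses a simple Borda-count aggregation and compares each protected group's mean rank in the consensus; the function and variable names are hypothetical.

```python
# Hypothetical sketch, not FairFuse's actual algorithm: aggregate base
# rankings via Borda scores, then audit group fairness by comparing each
# protected group's mean position in the resulting consensus ranking.
from collections import defaultdict

def borda_consensus(base_rankings):
    """Each base ranking is a list of candidates, best first.
    A candidate earns (n - position - 1) points in each ranking."""
    n = len(base_rankings[0])
    scores = defaultdict(int)
    for ranking in base_rankings:
        for pos, cand in enumerate(ranking):
            scores[cand] += n - pos - 1
    # Highest total score ranks first.
    return sorted(scores, key=scores.get, reverse=True)

def mean_rank_by_group(ranking, group_of):
    """Average 1-based rank per protected group.
    Closer group averages indicate a fairer consensus."""
    totals, counts = defaultdict(float), defaultdict(int)
    for pos, cand in enumerate(ranking, start=1):
        g = group_of[cand]
        totals[g] += pos
        counts[g] += 1
    return {g: totals[g] / counts[g] for g in totals}

# Three rankers, four candidates; groups G1/G2 are the protected attribute.
base = [["a", "b", "c", "d"], ["b", "a", "c", "d"], ["a", "c", "b", "d"]]
groups = {"a": "G1", "b": "G1", "c": "G2", "d": "G2"}
consensus = borda_consensus(base)           # ["a", "b", "c", "d"]
audit = mean_rank_by_group(consensus, groups)  # {"G1": 1.5, "G2": 3.5}
```

Here the audit reveals that group G2's candidates sit well below G1's on average, the kind of disparity a fairness-aware consensus algorithm would trade off against ranker agreement.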