In this paper, we derive an algorithmic fairness metric, grounded in the fairness notion of equal opportunity for equally qualified candidates, for the recommendation algorithms commonly used by two-sided marketplaces. We draw on the economic literature on discrimination to arrive at a test for detecting bias that is attributable solely to the algorithm, as opposed to other sources such as societal inequality or human bias on the part of platform users. We use the proposed method to measure and quantify algorithmic bias with respect to gender in two algorithms used by LinkedIn, a popular online platform connecting job seekers and employers. Moreover, we introduce a framework, and the rationale behind it, for distinguishing algorithmic bias from human bias, both of which can potentially exist on a two-sided platform where algorithms make recommendations to human users. Finally, we discuss the shortcomings of several other common algorithmic fairness metrics and explain why they do not capture the fairness notion of equal opportunity for equally qualified candidates.
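As background for the fairness notion the abstract invokes, the classic "equal opportunity" criterion asks that equally qualified candidates receive positive predictions at equal rates across groups, i.e. equal true positive rates. The sketch below is an illustrative toy of that generic criterion, not the paper's actual test; all data and names are hypothetical.

```python
# Toy sketch of the generic equal-opportunity (TPR-parity) check.
# This is NOT the paper's proposed test; data and names are hypothetical.

def true_positive_rate(y_true, y_pred):
    """Fraction of qualified candidates (y_true == 1) who were selected."""
    positives = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(positives) / len(positives) if positives else 0.0

def equal_opportunity_gap(y_true, y_pred, group):
    """Largest difference in TPR between any two groups, plus per-group TPRs."""
    rates = {}
    for g in set(group):
        idx = [i for i, gi in enumerate(group) if gi == g]
        rates[g] = true_positive_rate([y_true[i] for i in idx],
                                      [y_pred[i] for i in idx])
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical labels: y_true = qualified, y_pred = recommended by the algorithm
y_true = [1, 1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 1]
group  = ["a", "a", "a", "b", "b", "b", "a", "b"]
gap, rates = equal_opportunity_gap(y_true, y_pred, group)
# Group "a": all 3 qualified candidates recommended (TPR 1.0);
# group "b": 1 of 3 (TPR 1/3), so the gap is 2/3.
```

A nonzero gap on its own does not distinguish algorithmic bias from societal or human bias, which is precisely the gap the paper's economic-discrimination-based test aims to address.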