Algorithmic bias is of increasing concern, both to the research community and to society at large. Bias in AI is more abstract and unintuitive than traditional forms of discrimination, and can therefore be more difficult to detect and mitigate. A clear gap exists in the current literature on evaluating the relative bias in the performance of multi-class classifiers. In this work, we propose two simple yet effective metrics, Combined Error Variance (CEV) and Symmetric Distance Error (SDE), that quantitatively evaluate the class-wise bias of two models in comparison to one another. By evaluating the performance of these new metrics and demonstrating their practical application, we show that they can be used to measure fairness as well as bias. These demonstrations show that our metrics can address specific needs for measuring bias in multi-class classification.
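The abstract names CEV and SDE but defers their precise definitions to the body of the paper. As a minimal, purely illustrative sketch of the kind of class-wise, two-model comparison described here, the code below computes per-class error-rate changes between a baseline and a second model, then summarizes them with a variance-style and a distance-style statistic. The function names `cev_like` and `sde_like`, and every formula in this sketch, are assumptions for illustration, not the authors' actual definitions.

```python
# Illustrative sketch only: the formulas below are plausible stand-ins for a
# class-wise, two-model bias comparison, NOT the paper's exact CEV/SDE.
import numpy as np

def per_class_error_rates(y_true, y_pred, num_classes):
    """Return per-class false-positive and false-negative rates."""
    fpr = np.zeros(num_classes)
    fnr = np.zeros(num_classes)
    for c in range(num_classes):
        pos = y_true == c
        neg = ~pos
        fnr[c] = np.mean(y_pred[pos] != c) if pos.any() else 0.0
        fpr[c] = np.mean(y_pred[neg] == c) if neg.any() else 0.0
    return fpr, fnr

def compare_models(y_true, pred_a, pred_b, num_classes):
    """Hypothetical class-wise comparison of model B against baseline A."""
    fpr_a, fnr_a = per_class_error_rates(y_true, pred_a, num_classes)
    fpr_b, fnr_b = per_class_error_rates(y_true, pred_b, num_classes)
    eps = 1e-8  # avoid division by zero for classes with zero baseline error
    d_fpr = (fpr_b - fpr_a) / (fpr_a + eps)  # relative per-class FPR change
    d_fnr = (fnr_b - fnr_a) / (fnr_a + eps)  # relative per-class FNR change
    # Variance-style summary: how unevenly the change is spread across classes
    cev_like = np.mean((d_fpr - d_fpr.mean()) ** 2 + (d_fnr - d_fnr.mean()) ** 2)
    # Distance-style summary: per-class asymmetry between FPR and FNR shifts
    sde_like = np.mean(np.abs(d_fpr - d_fnr))
    return cev_like, sde_like

# Example usage: compare a modified model against its baseline on held-out data.
# cev_like, sde_like = compare_models(y_true, preds_baseline, preds_modified, 10)
```

The design intuition matches the abstract's framing: both statistics are zero when a second model shifts every class's errors identically, and grow as the shift concentrates on particular classes.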