Several recent works encourage the use of a Bayesian framework when assessing performance and fairness metrics of a classification algorithm in a supervised setting. We propose the Uncertainty Matters (UM) framework, which generalizes the Beta-Binomial approach to derive the posterior distribution of any combination of criteria, allowing stable performance assessment in a bias-aware setting. We suggest modeling the confusion matrix of each demographic group with a Multinomial distribution updated through a Bayesian procedure. We extend UM to be applicable under the popular K-fold cross-validation procedure. Experiments highlight the benefits of UM over classical evaluation frameworks regarding informativeness and stability.
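As a rough illustration of the Dirichlet-Multinomial idea described above (not the authors' implementation), the sketch below places a Dirichlet posterior over the confusion-matrix cells of each demographic group and propagates samples to a derived criterion. The uniform prior, the group names, the counts, and the chosen criterion (true-positive-rate gap) are hypothetical.

```python
# Minimal sketch: Dirichlet-Multinomial posterior over a per-group confusion
# matrix, assuming a flat Dirichlet prior and illustrative counts.
import numpy as np

rng = np.random.default_rng(0)
prior = np.ones(4)  # flat Dirichlet prior over (TP, FP, FN, TN)

# Observed confusion-matrix counts per demographic group (hypothetical data).
counts = {
    "group_a": np.array([80, 10, 15, 95]),    # TP, FP, FN, TN
    "group_b": np.array([40, 20, 25, 115]),
}

# Posterior over cell probabilities is Dirichlet(prior + counts); draw samples.
samples = {g: rng.dirichlet(prior + c, size=10_000) for g, c in counts.items()}

# Any derived criterion (here: the true-positive rate) inherits a full
# posterior, so group differences come with credible intervals, not points.
def tpr(s):
    return s[:, 0] / (s[:, 0] + s[:, 2])

gap = tpr(samples["group_a"]) - tpr(samples["group_b"])
print("TPR gap: mean %.3f, 95%% CI [%.3f, %.3f]"
      % (gap.mean(), *np.percentile(gap, [2.5, 97.5])))
```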