In real-world datasets, particular groups are often under-represented, far rarer than others, and machine learning classifiers frequently perform worse on these under-represented populations. The problem is aggravated in domains where datasets are also class imbalanced, with a minority class far rarer than the majority class. Naive approaches to handling under-representation and class imbalance include training sub-population-specific classifiers that address class imbalance, or training a global classifier that overlooks sub-population disparities and aims for high overall accuracy while handling class imbalance. In this study, we find that both approaches are vulnerable in class-imbalanced datasets with minority sub-populations. We introduce Fair-Net, a branched multitask neural network architecture that improves both classification accuracy and probability calibration across identifiable sub-populations in class-imbalanced datasets. Fair-Net is a straightforward extension to the output layer and error function of a network, so it can be incorporated into far more complex architectures. Empirical studies with three real-world benchmark datasets demonstrate that Fair-Net improves classification and calibration performance, substantially reducing performance disparities between gender and racial sub-populations.
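The branched multitask idea described above can be sketched minimally: a shared representation feeds a global output head plus one head per identifiable sub-population, and the training loss combines the global error with each branch's error on its own samples. The sketch below is a hedged numpy illustration under assumed details (random toy data, one linear head per group, an unweighted sum of per-branch binary cross-entropies); it is not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative): n samples, d features, two identifiable sub-populations.
n, d, hidden = 8, 4, 3
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n).astype(float)   # binary class labels
group = rng.integers(0, 2, size=n)             # sub-population id per sample

# Shared trunk weights plus a global head and one branch head per sub-population.
W_shared = rng.normal(size=(d, hidden))
head_global = rng.normal(size=hidden)
heads = {g: rng.normal(size=hidden) for g in (0, 1)}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, t, eps=1e-7):
    # Binary cross-entropy, clipped for numerical stability.
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(t * np.log(p) + (1 - t) * np.log(1 - p))

h = np.tanh(X @ W_shared)                      # shared representation

# Global branch scores every sample; each group branch scores only its own subset,
# so the combined error function penalizes poor fit on each sub-population directly.
loss = bce(sigmoid(h @ head_global), y)
for g, w in heads.items():
    mask = group == g
    if mask.any():
        loss += bce(sigmoid(h[mask] @ w), y[mask])

print(float(loss))
```

Because only the output layer and loss are extended, the same branching pattern could in principle be attached to any shared trunk, which is what allows incorporation into more complex architectures.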