Despite achieving state-of-the-art results in several computer vision and natural language processing tasks, neural networks have faced harsh criticism due to some of their current shortcomings. One of them is that neural networks are correlation machines prone to modeling biases within the data instead of focusing on actually useful causal relationships. This problem is particularly serious in application domains affected by sensitive attributes such as race, gender, and age. To prevent models from making unfair decisions, the AI community has concentrated efforts on correcting algorithmic biases, giving rise to the research area now widely known as fairness in AI. In this survey paper, we provide an in-depth overview of the main debiasing methods for fairness-aware neural networks in the context of vision and language research. We propose a novel taxonomy to better organize the literature on debiasing methods for fairness, and we discuss the current challenges, trends, and important future work directions for interested researchers and practitioners.