Deep neural networks are highly susceptible to learning biases present in visual data. While various methods have been proposed to mitigate such bias, most require explicit knowledge of the biases in the training data. We argue for exploring methods that are entirely agnostic to the presence of any bias yet capable of identifying and mitigating it. To that end, we propose using Bayesian neural networks with an epistemic uncertainty-weighted loss function to dynamically identify potential bias in individual training samples and to weight them accordingly during training. We find a positive correlation between samples subject to bias and higher epistemic uncertainty. Finally, we show that the method has the potential to mitigate visual bias on a bias benchmark dataset and on a real-world face-detection problem, and we discuss the merits and weaknesses of our approach.
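To make the idea concrete, the following is a minimal sketch of how epistemic uncertainty from Monte Carlo (MC-dropout-style) forward passes could be used to down-weight potentially biased samples. This is an illustration under assumed choices, not the paper's exact formulation: the mutual-information decomposition for epistemic uncertainty and the `exp(-u)` weighting scheme are hypothetical design choices introduced here.

```python
import numpy as np

def epistemic_uncertainty(mc_probs):
    """Epistemic uncertainty per sample via mutual information.

    mc_probs: array of shape (T, N, C) holding softmax outputs from
    T stochastic forward passes over N samples with C classes.
    """
    eps = 1e-12
    mean_p = mc_probs.mean(axis=0)                                   # (N, C) predictive mean
    total = -(mean_p * np.log(mean_p + eps)).sum(axis=1)             # predictive entropy
    aleatoric = -(mc_probs * np.log(mc_probs + eps)).sum(axis=2).mean(axis=0)
    return total - aleatoric                                         # epistemic component

def uncertainty_weighted_ce(mc_probs, labels):
    """Cross-entropy where high-uncertainty samples are down-weighted.

    The exp(-u) weighting is one plausible choice, used here only
    for illustration; the paper's weighting may differ.
    """
    eps = 1e-12
    u = epistemic_uncertainty(mc_probs)
    w = np.exp(-u)                                                   # higher uncertainty -> lower weight
    mean_p = mc_probs.mean(axis=0)
    ce = -np.log(mean_p[np.arange(len(labels)), labels] + eps)
    return (w * ce).mean()
```

In practice `mc_probs` would come from multiple dropout-enabled forward passes of a Bayesian (or approximately Bayesian) network; samples whose predictions vary across passes receive higher epistemic uncertainty and therefore a smaller contribution to the loss.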