As affective robots become integral to human life, these agents must be able to fairly evaluate human affective expressions without discriminating against specific demographic groups. With bias in Machine Learning (ML) systems identified as a critical problem, different approaches have been proposed to mitigate such biases in models at both the data and algorithmic levels. In this work, we propose Continual Learning (CL) as an effective strategy to enhance fairness in Facial Expression Recognition (FER) systems, guarding against biases arising from imbalances in data distributions. We compare different state-of-the-art bias mitigation approaches with CL-based strategies for fairness on expression recognition and Action Unit (AU) detection tasks, using a popular benchmark for each: RAF-DB and BP4D. Our experiments show that CL-based methods, on average, outperform popular bias mitigation techniques, strengthening the case for further investigation into CL for the development of fairer FER algorithms.