Action Unit (AU) detection is the branch of affective computing that aims at recognizing unitary facial muscle movements. It is key to unlocking unbiased computational face representations and has therefore attracted great interest in the past few years. One of the main obstacles to building efficient deep-learning-based AU detection systems is the lack of large facial image databases annotated by AU experts. In that regard, the ABAW challenge paves the way toward better AU detection, as it involves a dataset of 2M frames annotated with AUs. In this paper, we present our submission to the ABAW3 challenge. In a nutshell, we applied a multi-label detection transformer that leverages multi-head attention to learn which parts of the face image are the most relevant for predicting each AU.
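To illustrate the idea of a multi-label detection transformer for AU prediction, the following is a minimal sketch (not the authors' implementation): learnable per-AU query tokens attend over backbone patch features via multi-head attention, and each attended query is mapped to a sigmoid score for its AU. All module names, dimensions, and the number of AUs are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AUDetectionHead(nn.Module):
    """Hypothetical multi-label AU detection head with per-AU attention queries."""

    def __init__(self, num_aus=12, dim=256, num_heads=8):
        super().__init__()
        # One learnable query per Action Unit (DETR-style object queries).
        self.au_queries = nn.Parameter(torch.randn(num_aus, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(dim, 1)  # one logit per AU query

    def forward(self, patch_features):
        # patch_features: (batch, num_patches, dim) from a visual backbone.
        batch = patch_features.size(0)
        queries = self.au_queries.unsqueeze(0).expand(batch, -1, -1)
        # Each AU query attends to the image patches most relevant to that AU.
        attended, attn_weights = self.attn(queries, patch_features, patch_features)
        logits = self.classifier(attended).squeeze(-1)  # (batch, num_aus)
        return torch.sigmoid(logits), attn_weights

# Example usage with random features standing in for a CNN/ViT backbone output.
head = AUDetectionHead()
features = torch.randn(4, 196, 256)
probs, weights = head(features)
print(probs.shape)  # torch.Size([4, 12])
```

The attention weights returned by the head indicate, for each AU, which facial regions were attended to, which is one way such a model can expose the face parts it relies on per AU.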