Facial affect analysis (FAA) using visual signals is a key step in human-computer interaction. Early methods mainly focus on extracting appearance and geometry features associated with human affect, while ignoring the latent semantic information among individual facial changes, leading to limited performance and generalization. Recent trends attempt to establish graph-based representations that model these semantic relationships and to develop learning frameworks that leverage them for different FAA tasks. In this paper, we provide a comprehensive review of graph-based FAA, including the evolution of its algorithms and their applications. First, we introduce the background of facial affect analysis, with an emphasis on the role of graphs. We then discuss approaches widely used for graph-based affective representation in the literature and identify a trend in graph construction. For relational reasoning in graph-based FAA, we categorize existing studies by their use of traditional methods or deep models, with a special emphasis on the latest graph neural networks. Experimental comparisons of state-of-the-art methods on standard FAA problems are also summarized. Finally, we discuss the remaining challenges and potential directions. To the best of our knowledge, this is the first survey of graph-based FAA methods, and our findings can serve as a reference point for future research in this field.