In our multicultural world, affect-aware AI systems that support humans need the ability to perceive affect across cultural variations in emotion expression patterns. These models must perform well in cultural contexts on which they have not been trained. A standard assumption in affective computing is that affect recognition models trained and used within the same culture (intracultural) will perform better than models trained on one culture and used on different cultures (intercultural). We test this assumption and present the first systematic study of intercultural affect recognition models using videos of real-world dyadic interactions from six cultures. We develop an attention-based feature selection approach, grounded in temporal causal discovery, to identify behavioral cues that can be leveraged in intercultural affect recognition models. Across all six cultures, our findings demonstrate that intercultural affect recognition models were as effective as, or more effective than, intracultural models. We identify and contribute behavioral features useful for intercultural affect recognition; in this study's context, facial features from the visual modality were more useful than features from the audio modality. Our paper presents a proof-of-concept and motivation for the future development of intercultural affect recognition systems.
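To make the attention-based feature selection idea concrete, the following is a minimal illustrative sketch in PyTorch, not the authors' implementation: it learns one attention weight per behavioral feature channel and keeps the top-scoring channels for a downstream affect recognition model. The class name, feature dimensions, and top-k selection rule are hypothetical placeholders, and the temporal causal discovery component of the paper's approach is not represented here.

```python
# Illustrative sketch only (assumed design, not the paper's code):
# score behavioral feature channels with learned attention weights,
# then select the highest-weighted channels.
import torch
import torch.nn as nn

class AttentionFeatureScorer(nn.Module):
    """Assigns a learned attention weight to each behavioral feature channel."""

    def __init__(self, num_features: int):
        super().__init__()
        # One learnable logit per channel (e.g., a facial action unit or an
        # audio prosody statistic); softmax converts logits to attention weights.
        self.logits = nn.Parameter(torch.zeros(num_features))

    def forward(self, x: torch.Tensor):
        # x: (batch, time, num_features) sequence of extracted behavioral cues.
        weights = torch.softmax(self.logits, dim=0)   # (num_features,)
        weighted = x * weights                        # reweight each channel
        return weighted, weights

if __name__ == "__main__":
    # Hypothetical usage: rank channels by attention weight, keep the top 8.
    scorer = AttentionFeatureScorer(num_features=32)
    clip = torch.randn(4, 100, 32)   # 4 clips, 100 frames, 32 feature channels
    _, weights = scorer(clip)
    selected = torch.topk(weights, k=8).indices
    print("selected feature indices:", selected.tolist())
```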