Recently, GPT-4 with Vision (GPT-4V) has demonstrated remarkable visual capabilities across various tasks, but its performance in emotion recognition has not been fully evaluated. To bridge this gap, we present quantitative evaluation results of GPT-4V on 19 benchmark datasets covering 5 tasks: visual sentiment analysis, micro-expression recognition, facial emotion recognition, dynamic facial emotion recognition, and multimodal emotion recognition. This paper collectively refers to these tasks as ``Generalized Emotion Recognition (GER)''. Through experimental analysis, we observe that GPT-4V generally outperforms supervised systems in visual sentiment analysis, highlighting its strong visual understanding capabilities. Meanwhile, GPT-4V shows the ability to integrate multimodal cues and exploit temporal information, both of which are critical for emotion recognition. Despite these achievements, GPT-4V is primarily tailored to general-purpose domains and cannot recognize micro-expressions, which require specialized knowledge. To the best of our knowledge, this paper provides the first quantitative assessment of GPT-4V on GER tasks, offering valuable insights to researchers in this field. It can also serve as a zero-shot benchmark for subsequent research. Our code and evaluation results are available at: https://github.com/zeroQiaoba/gpt4v-emotion.