The quantified measurement of facial expressiveness is crucial for analyzing human affective behavior at scale. Unfortunately, methods for expressiveness quantification at the video frame level remain largely unexplored, unlike the study of discrete expressions. In this work, we propose an algorithm that quantifies facial expressiveness with a bounded, continuous expressiveness score computed from multimodal facial features, such as action units (AUs), landmarks, head pose, and gaze. The proposed algorithm more heavily weights AUs with high intensities and large temporal changes. It can also compute expressiveness with respect to a discrete expression, and can be used for tasks including facial behavior tracking and subjectivity quantification in context. Our results on benchmark datasets show the proposed algorithm is effective at capturing temporal changes in expressiveness, measuring subjective differences in context, and extracting useful insights.