Micro-expressions are hard to spot because they arise from fleeting, involuntary movements of the facial muscles, which makes interpreting micro-emotions from video clips a challenging task. In this paper we propose affective-motion imaging, which cumulates the rapid, short-lived variational information of a micro-expression into a single response. Moreover, we propose AffectiveNet, an affective-motion feature learning network that perceives subtle changes and learns the most discriminative dynamic features to describe the emotion classes. AffectiveNet comprises two blocks: the MICRoFeat block and the MFL block. The MICRoFeat block conserves scale-invariant features, which allows the network to capture both coarse and fine edge variations, while the MFL block learns micro-level dynamic variations from two different intermediate convolutional layers. The effectiveness of the proposed network is evaluated on four datasets using two experimental setups: person-independent (PI) and cross-dataset (CD) validation. The experimental results show that the proposed network outperforms state-of-the-art MER approaches by a significant margin.
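As a rough illustration of the idea of collapsing a clip into a single motion response, the sketch below accumulates inter-frame intensity differences over a grayscale sequence. The abstract does not specify the exact accumulation formula, so the frame differencing and min-max normalization here are assumptions, not the paper's method:

```python
import numpy as np

def affective_motion_image(frames):
    """Collapse a short clip into one motion-response image (illustrative only).

    frames: array-like of grayscale frames, shape (T, H, W).
    Returns an (H, W) float image normalized to [0, 1].
    """
    frames = np.asarray(frames, dtype=np.float64)
    # Sum absolute differences between consecutive frames so that rapid,
    # short-lived muscle movements all contribute to a single response map.
    motion = np.abs(np.diff(frames, axis=0)).sum(axis=0)
    rng = motion.max() - motion.min()
    if rng == 0:
        # No motion anywhere in the clip: return an all-zero response.
        return np.zeros_like(motion)
    return (motion - motion.min()) / rng
```

Such a single-image representation can then be fed to a 2D CNN such as the proposed AffectiveNet, avoiding per-frame processing of the full video.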