Cross-Database Micro-Expression Recognition (CDMER) aims to develop Micro-Expression Recognition (MER) methods that remain effective under different acquisition conditions (equipment, subjects, and scenes) in practical applications, i.e., MER methods with strong domain adaptation ability. CDMER faces two obstacles: 1) the severe feature distribution gap between the training and test databases, and 2) the feature representation bottleneck caused by micro-expressions (MEs) being local and subtle facial movements. To overcome these obstacles, this paper proposes a novel Transfer Group Sparse Regression method, termed TGSR, which seeks and selects salient facial regions to 1) enable a more precise feature-level measurement of the difference between the source and target databases, so that this difference can be better alleviated, and 2) make the extracted hand-crafted features more effective and interpretable for MER. We evaluate the proposed TGSR on two public micro-expression databases, CASME II and SMIC. Experimental results show that TGSR learns discriminative features and outperforms most state-of-the-art subspace-learning-based domain adaptation methods for CDMER.
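The abstract describes TGSR only at a high level, so the following is a minimal, hypothetical sketch of what group-sparse transfer regression over region-grouped features might look like; it is not the authors' implementation. The objective, hyper-parameters, and all function and variable names (e.g., tgsr_sketch, group_soft_threshold, lam, mu) are illustrative assumptions: region-grouped features are regressed onto label indicators with a group-lasso penalty that zeroes out non-salient facial regions, plus a simple mean-discrepancy term that pulls the source and target feature means together.

```python
import numpy as np

def group_soft_threshold(W, groups, step, lam):
    """Proximal operator of the group-lasso penalty lam * sum_g ||W_g||_F."""
    W = W.copy()
    for g in groups:
        norm = np.linalg.norm(W[g])
        scale = max(0.0, 1.0 - step * lam / (norm + 1e-12))
        W[g] *= scale
    return W

def tgsr_sketch(Xs, Ys, Xt, groups, lam=0.1, mu=0.1, lr=1e-3, n_iter=500):
    """Learn a group-sparse regression matrix W by proximal gradient descent.

    Xs: (n_s, d) source features grouped by facial region (rows of W per group)
    Ys: (n_s, c) one-hot source labels
    Xt: (n_t, d) unlabeled target features
    groups: list of index arrays, one per facial region
    """
    d, c = Xs.shape[1], Ys.shape[1]
    W = np.zeros((d, c))
    m = Xs.mean(axis=0) - Xt.mean(axis=0)  # source/target mean gap, shape (d,)
    for _ in range(n_iter):
        # Smooth part: regression loss + squared mean discrepancy after projection.
        grad = 2 * Xs.T @ (Xs @ W - Ys) / len(Xs) + 2 * mu * np.outer(m, m @ W)
        # Non-smooth part: block soft-thresholding enforces region-level sparsity.
        W = group_soft_threshold(W - lr * grad, groups, lr, lam)
    return W

# Toy usage (synthetic data): 6 facial regions x 10 features each, 3 classes.
rng = np.random.default_rng(0)
groups = [np.arange(i * 10, (i + 1) * 10) for i in range(6)]
Xs, Xt = rng.normal(size=(80, 60)), rng.normal(size=(50, 60))
Ys = np.eye(3)[rng.integers(0, 3, size=80)]
W = tgsr_sketch(Xs, Ys, Xt, groups)
salient = [i for i, g in enumerate(groups) if np.linalg.norm(W[g]) > 1e-6]
print("regions retained by the group-sparse penalty:", salient)
```

In this sketch, the groups whose blocks of W survive the group-lasso shrinkage play the role of the selected salient facial regions, and the mean-discrepancy term stands in for whatever feature-level distribution measure the paper actually uses.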