This paper does not claim technical novelty; rather, it presents our key discoveries in the form of a data generation protocol, a database, and accompanying insights. We aim to address the lack of large-scale datasets in micro-expression (MiE) recognition, a shortage caused by the prohibitive cost of data collection, which renders large-scale training infeasible. To this end, we develop a protocol that automatically synthesizes large-scale MiE training data, allowing us to train improved recognition models for real-world test data. Specifically, we discover three types of Action Units (AUs) that can constitute trainable MiEs: AUs extracted from real-world MiEs, AUs from the early frames of macro-expression videos, and AUs derived from the relationship between AUs and expression categories defined by human expert knowledge. With these AUs, our protocol then employs a large number of face images of various identities and an off-the-shelf face generator for MiE synthesis, yielding the MiE-X dataset. MiE recognition models trained or pre-trained on MiE-X achieve very competitive accuracy when evaluated on real-world test sets. Experimental results not only validate the effectiveness of the discovered AUs and the MiE-X dataset but also reveal some interesting properties of MiEs: they generalize across faces, are close to early-stage macro-expressions, and can be manually defined.
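To make the synthesis protocol concrete, below is a minimal Python sketch of the pipeline the abstract describes: sample a peak AU configuration from one of the three sources, ramp its intensity to mimic the onset-apex-offset dynamics of a micro-expression, and render each frame with an AU-conditioned face generator. The AU indices and intensities, the 17-dimensional AU vector size, and the `generator(face, au_vector)` interface (standing in for an off-the-shelf model such as a GANimation-style generator) are all illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

# Toy AU vocabulary standing in for the paper's three AU sources:
# (1) AUs from real MiEs, (2) AUs from early macro-expression frames,
# (3) expert-defined AU-emotion mappings. Values are placeholders, not MiE-X data.
AU_SOURCES = {
    "real_mie":       {"positive": {12: 0.3}, "negative": {4: 0.4, 7: 0.2}},
    "early_macro":    {"positive": {6: 0.2, 12: 0.2}, "negative": {4: 0.3}},
    "expert_defined": {"positive": {12: 0.5}, "negative": {4: 0.5, 15: 0.3}},
}
NUM_AUS = 17  # common AU coding size; an assumption for this sketch


def au_dict_to_vector(au_dict):
    """Convert {AU index: peak intensity} into a dense AU intensity vector."""
    vec = np.zeros(NUM_AUS)
    for au_idx, intensity in au_dict.items():
        vec[au_idx] = intensity
    return vec


def synthesize_mie_clip(neutral_face, au_dict, generator, num_frames=9):
    """Render an onset-apex-offset MiE clip by ramping AU intensities.

    `generator(face, au_vector) -> frame` is a hypothetical hook for an
    off-the-shelf AU-conditioned face generator.
    """
    peak = au_dict_to_vector(au_dict)
    apex = num_frames // 2
    frames = []
    for t in range(num_frames):
        # Triangular profile: neutral -> apex -> neutral, with the low peak
        # intensity characteristic of micro-expressions.
        scale = 1.0 - abs(t - apex) / apex
        frames.append(generator(neutral_face, scale * peak))
    return frames


if __name__ == "__main__":
    # Stub generator so the sketch runs without a trained model: it merely
    # brightens the image by the mean AU intensity instead of editing pixels.
    stub_gen = lambda face, au: face + au.mean()
    neutral = np.zeros((128, 128))  # placeholder neutral face image
    clip = synthesize_mie_clip(neutral, AU_SOURCES["real_mie"]["positive"],
                               generator=stub_gen)
    print(len(clip), clip[len(clip) // 2].max())  # 9 frames; apex is brightest
```

In the actual protocol, this loop would be repeated over many face identities and AU configurations drawn from all three sources to assemble a large-scale labeled training set such as MiE-X.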