Micro-expressions have drawn increasing interest lately due to their various potential applications. The task is, however, difficult, as it incorporates many challenges from the fields of computer vision, machine learning, and emotional sciences. Due to the spontaneous and subtle characteristics of micro-expressions, the available training and testing data are limited, which makes evaluation complex. We show that data leakage and fragmented evaluation protocols are issues in the micro-expression literature. We find that fixing data leaks can drastically reduce model performance, in some cases even making the models perform on par with a random classifier. To this end, we review common pitfalls, propose a new standardized evaluation protocol using facial action units with over 2000 micro-expression samples, and provide an open-source library that implements the evaluation protocols in a standardized manner. Code is publicly available at \url{https://github.com/tvaranka/meb}.
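The data leakage described above typically arises when samples from the same subject land in both the training and test partitions, letting a model exploit subject identity rather than expression cues. Below is a minimal sketch of subject-independent cross-validation that avoids this leak; it uses scikit-learn's generic `LeaveOneGroupOut` splitter on toy data, and is an illustration only, not the API of the authors' MEB library.

```python
# Sketch: subject-independent splitting to prevent subject-level data leakage.
# Toy data and subject ids are hypothetical; any real dataset would supply
# one subject label per sample.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

X = np.arange(12).reshape(6, 2)          # 6 toy samples, 2 features each
y = np.array([0, 1, 0, 1, 0, 1])         # toy labels
subjects = np.array([0, 0, 1, 1, 2, 2])  # subject id for each sample

logo = LeaveOneGroupOut()
for train_idx, test_idx in logo.split(X, y, groups=subjects):
    # No subject ever appears in both partitions, so the model cannot
    # shortcut by memorizing subject identity.
    assert set(subjects[train_idx]).isdisjoint(subjects[test_idx])
```

With three subjects this yields three folds, each holding out one subject entirely; a leaky protocol that splits by sample instead of by subject would mix the same faces across train and test.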