Facial expressions are an important component of both gesture and sign language recognition systems. Despite recent advances in both fields, annotated facial expression datasets in the context of sign language remain scarce. In this manuscript, we introduce FePh, an annotated sequenced facial expression dataset in the context of sign language, comprising over $3000$ facial images extracted from the daily news and weather forecast broadcasts of the public TV station PHOENIX. Unlike the majority of existing facial expression datasets, FePh provides sequenced semi-blurry facial images with varying head poses, orientations, and movements. In addition, in the majority of images the signers are mouthing words, which makes the data more challenging. To annotate the dataset, we consider primary, secondary, and tertiary dyads of the seven basic emotions "sad", "surprise", "fear", "angry", "neutral", "disgust", and "happy". We also include a "None" class for images whose facial expression cannot be described by any of these emotions. Although we present FePh as a facial expression dataset of signers, it has wider applications in gesture recognition and Human Computer Interaction (HCI) systems.
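To make the label space concrete, the Python sketch below enumerates one hedged reading of this annotation scheme: each image carries either a single basic emotion, a dyad (unordered pair) of two distinct basic emotions, or the fallback "None" class. The names `BASIC_EMOTIONS` and `all_label_sets` are illustrative and not part of the released dataset tooling, and the grading of dyads into primary, secondary, and tertiary follows the paper rather than this sketch.

```python
from itertools import combinations

# The seven basic emotions used to annotate FePh (from the abstract above).
BASIC_EMOTIONS = ["sad", "surprise", "fear", "angry", "neutral", "disgust", "happy"]

def all_label_sets() -> list[tuple[str, ...]]:
    """Enumerate a hypothetical label space: the "None" fallback, each single
    emotion, and every unordered dyad of two distinct emotions. The paper's
    primary/secondary/tertiary distinction among dyads is orthogonal to the
    plain enumeration shown here."""
    labels: list[tuple[str, ...]] = [("None",)]
    labels += [(e,) for e in BASIC_EMOTIONS]         # 7 single-emotion labels
    labels += list(combinations(BASIC_EMOTIONS, 2))  # 21 dyads
    return labels

if __name__ == "__main__":
    print(len(all_label_sets()))  # 1 + 7 + 21 = 29 candidate label sets
```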