This study proposes and tests a technique for automated emotion recognition through mouth detection with Convolutional Neural Networks (CNNs), intended to support people whose health disorders impair communication (e.g. muscle wasting, stroke, autism, or simply pain) by recognizing emotions and generating real-time feedback, or by feeding data to assistive systems. The software first determines whether a face is present in the acquired image, then locates the mouth and extracts the corresponding features. Both tasks are carried out with Haar feature-based classifiers, which guarantee fast execution and promising performance. Whereas our previous work focused on visual micro-expressions for personalized training on a single user, this strategy also trains the system on generalized face data sets.