Project Title: Research on Vision-Speech Multimodal Collaborative Emotion Analysis Based on Emotional Context
Project Number: No. 61272211
Project Type: General Program
Year Approved: 2013
Discipline: Automation Technology; Computer Technology
Principal Investigator: 毛启容 (Mao Qirong)
Affiliation: Jiangsu University
Funding Amount: 780,000 CNY
Chinese Abstract (translated): This project exploits the fact that emotional context, posture, speech, and facial expressions carry abundant, mutually complementary emotion information, and combines structured sparse representation with multi-agent collaborative analysis to analyze a subject's emotion accurately even when information from some channels is missing. The main research contents are: dynamic extraction and analysis of emotional context based on the scene and the analyzed subject, studying different dynamic extraction methods for the environment, the scene, the subject's personal information, speech emotional context, and visual context; a two-level posture emotion feature extraction method based on video tracking, studying real-time extraction of the subject's posture features at both the video-frame and video-sequence levels; structured sparse representation of emotion features, which expresses the correlations among emotion features and thereby gains stronger discriminative power; and multimodal collaborative emotion analysis based on emotion classification agents, which fuses multi-channel emotion features and, through negotiation, cooperation, and complementarity among the agents, improves the accuracy and robustness of emotion analysis in natural interaction environments. The results of this project can be applied to building intelligent, human-centered human-computer interaction environments, with significant economic and social benefits.
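The multimodal collaborative analysis described above can be illustrated with a minimal late-fusion sketch: per-modality "emotion agent" scores are combined with reliability weights that are renormalized over whichever channels are actually observed, so a lost channel degrades rather than breaks the decision. The modality names, weights, and weighted-averaging rule here are illustrative assumptions, not the project's actual negotiation protocol.

```python
def fuse_agents(scores, weights):
    """Fuse per-modality class-probability vectors, tolerating missing channels.

    scores:  dict modality -> probability vector, or None if the channel is lost
    weights: dict modality -> reliability weight (illustrative, fixed here)
    """
    # Keep only the modalities whose channel actually arrived.
    avail = {m: s for m, s in scores.items() if s is not None}
    # Renormalize the reliability weights over the observed modalities.
    total = sum(weights[m] for m in avail)
    fused = None
    for m, s in avail.items():
        contrib = [weights[m] / total * p for p in s]
        fused = contrib if fused is None else [a + b for a, b in zip(fused, contrib)]
    return fused

# Example: the facial-expression channel is lost; speech and posture still fuse.
scores = {"speech": [0.7, 0.3], "face": None, "posture": [0.5, 0.5]}
weights = {"speech": 0.5, "face": 0.3, "posture": 0.2}
fused = fuse_agents(scores, weights)  # weights renormalized over speech+posture
```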
Chinese Keywords (translated): Emotion Analysis; Emotional Context; Multimodal Collaborative Decision-Making; Feature Learning; Natural Environment
English Abstract: Vision-speech emotion analysis is one of the key problems in harmonious human-computer interaction. In this project, extraction methods for emotional context, postures, speech, and expressions, structured sparse representation methods, and collaborative emotion analysis methods based on emotion agents are studied to analyze emotions accurately in spontaneous interaction environments with missing data. The main research contents include: 1) Dynamic extraction methods for emotional context based on the environment and the analyzed subjects, which obtain emotional context information dynamically from the environment, action scenes, the subjects' personal information, speech, and vision. 2) Two-level posture emotion feature extraction methods, which extract posture emotion features accurately both from each video frame and from video frame sequences over time. 3) Structured sparse representation methods for emotion features with discriminative capability, which express the relationships among emotion features and provide nonlinear recognition capability. 4) Multimodal collaborative emotion analysis methods based on emotion classification agents, which improve the accuracy and robustness of emotion analysis in spontaneous interaction environments.
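The structured sparse representation in research content 3) can be sketched with a minimal group-lasso example: features are shrunk group by group, so an entire feature group (e.g. all features from one modality) is selected or discarded as a block, which is one standard way to encode relationships among features. The dictionary `D`, the feature grouping, and the ISTA solver below are illustrative assumptions, not the project's published formulation.

```python
import numpy as np

def group_soft_threshold(x, groups, lam):
    """Proximal operator of the group-lasso penalty lam * sum_g ||x_g||_2.

    Each group is shrunk toward zero as a block; groups whose norm falls
    below lam are zeroed out entirely.
    """
    z = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > lam:
            z[g] = (1.0 - lam / norm) * x[g]
    return z

def group_sparse_code(D, y, groups, lam=0.1, n_iter=200):
    """ISTA for min_c 0.5 * ||y - D c||^2 + lam * sum_g ||c_g||_2."""
    L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ c - y)       # gradient of the quadratic term
        c = group_soft_threshold(c - grad / L, groups, lam / L)
    return c
```

With `D = I` the solver reduces to a single proximal step, which makes the block-wise selection easy to verify: a strong group survives (shrunk), a weak group is zeroed entirely.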
English Keywords: Emotion Analysis; Emotional Context; Multi-modal Fusion; Feature Learning; Spontaneous Environment