In vision and language, the main input modalities are facial expressions, speech patterns, and the words uttered. The issue with analyzing any one mode of expression (Visual, Verbal, or Vocal) is that a lot of contextual information can be lost. This requires researchers to examine multiple modalities together to fully understand the cross-modal dependencies and temporal context of the situation before analyzing the expression. This work attempts to preserve the long-range dependencies within and across modalities, which would be bottlenecked by the use of recurrent networks, and adds the concept of delta-attention to focus on local differences per modality and capture the idiosyncrasies of different people. We explore a cross-attention fusion technique to obtain a global view of the emotion expressed through these delta-self-attended modalities, fusing the local nuances and the global context together. Attention is a recent addition to the multimodal fusion field, and the stage at which the attention mechanism should be applied is still under scrutiny; this work achieves competitive overall and per-class classification accuracy, close to the current state of the art, with almost half the number of parameters.
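To make the pipeline concrete, the following is a minimal PyTorch sketch of the two ideas named above: self-attention over per-step feature differences (delta-attention) applied to each modality, followed by cross-attention fusion across modalities. This is an illustrative assumption of one plausible realization, not the paper's implementation; the class names `DeltaSelfAttention` and `CrossModalFusion`, the difference-over-time definition of the delta, and all dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class DeltaSelfAttention(nn.Module):
    """Self-attention over frame-wise differences (deltas) of one modality's
    feature sequence, emphasizing local changes per modality (assumed form)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                             # x: (batch, time, dim)
        delta = x[:, 1:] - x[:, :-1]                  # local differences per step
        delta = torch.cat([x[:, :1], delta], dim=1)   # keep sequence length T
        out, _ = self.attn(delta, delta, delta)
        return out

class CrossModalFusion(nn.Module):
    """Cross-attention: one modality queries the other two to form a
    global view of the expressed emotion (assumed form)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, query_mod, context_mods):
        ctx = torch.cat(context_mods, dim=1)          # stack contexts along time
        out, _ = self.attn(query_mod, ctx, ctx)
        return out

# Usage: visual, vocal, verbal are (batch, time, dim) feature sequences.
dim = 64
visual, vocal, verbal = (torch.randn(2, 20, dim) for _ in range(3))
delta = DeltaSelfAttention(dim)
fuse = CrossModalFusion(dim)
v, a, t = delta(visual), delta(vocal), delta(verbal)
fused = fuse(t, [v, a])                               # text attends to vision + audio
logits = nn.Linear(dim, 6)(fused.mean(dim=1))         # pooled per-class emotion logits
```

Because both stages are attention-based rather than recurrent, every time step can attend to every other in one hop, which is how long-range dependencies avoid the recurrent-network bottleneck mentioned above.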