Decoding emotional states from human brain activity plays an important role in brain-computer interfaces. Existing emotion decoding methods still have two main limitations: one is that they decode only a single, coarse-grained emotion category from a brain activity pattern, which is inconsistent with the complex emotional expression of humans; the other is that they ignore the discrepancy in emotion expression between the left and right hemispheres of the human brain. In this paper, we propose a novel multi-view multi-label hybrid model for fine-grained emotion decoding (up to 80 emotion categories) which can learn expressive neural representations and predict multiple emotional states simultaneously. Specifically, the generative component of our hybrid model is parametrized by a multi-view variational auto-encoder, in which we regard the brain activity of the left and right hemispheres and their difference as three distinct views, and use a product-of-experts mechanism in its inference network. The discriminative component of our hybrid model is implemented by a multi-label classification network with an asymmetric focal loss. For more accurate emotion decoding, we first adopt a label-aware module to learn emotion-specific neural representations and then model the dependency among emotional states with a masked self-attention mechanism. Extensive experiments on two visually evoked emotional datasets show the superiority of our method.
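The product-of-experts mechanism mentioned above can be illustrated concretely: for Gaussian per-view posteriors, the product of the experts is itself Gaussian, with precision equal to the sum of the per-view precisions. This is a minimal sketch under that standard Gaussian assumption, not the paper's implementation; the function name and shapes are illustrative.

```python
import numpy as np

def product_of_experts(mus, logvars):
    """Combine per-view Gaussian posteriors q_i(z | x_i) into one Gaussian.

    mus, logvars: arrays of shape (n_views, latent_dim), e.g. one row each
    for the left hemisphere, right hemisphere, and their difference.
    Returns the mean and log-variance of the product distribution.
    """
    precisions = np.exp(-np.asarray(logvars))      # 1 / sigma_i^2 per view
    joint_var = 1.0 / precisions.sum(axis=0)       # combined variance
    joint_mu = joint_var * (np.asarray(mus) * precisions).sum(axis=0)
    return joint_mu, np.log(joint_var)
```

Note that the combined variance is never larger than any single view's variance, so each additional view can only sharpen the joint posterior over the latent code.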
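The asymmetric focal loss used by the discriminative component can be sketched as follows. This is a minimal per-label version under the common formulation where negatives get a larger focusing exponent and a probability shift; the parameter values are illustrative defaults, not the ones used in the paper.

```python
import numpy as np

def asymmetric_focal_loss(probs, targets, gamma_pos=0.0, gamma_neg=4.0,
                          clip=0.05, eps=1e-8):
    """Asymmetric focal loss for multi-label classification.

    probs, targets: arrays of shape (n_labels,); probs are predicted
    probabilities in (0, 1), targets are binary {0, 1} label indicators.
    gamma_neg > gamma_pos down-weights the many easy negatives that
    dominate a fine-grained (e.g. 80-label) problem.
    """
    p = np.clip(probs, eps, 1.0 - eps)
    p_neg = np.clip(p - clip, eps, 1.0 - eps)  # probability shift for negatives
    pos_term = targets * (1.0 - p) ** gamma_pos * np.log(p)
    neg_term = (1.0 - targets) * p_neg ** gamma_neg * np.log(1.0 - p_neg)
    return -(pos_term + neg_term).mean()
```

With these defaults, a confident wrong prediction on a positive label is penalized far more heavily than a mildly overconfident negative, which keeps the gradient signal focused on the rare present emotions.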