Classification of human emotions can play an essential role in the design and improvement of human-machine systems. While individual biological signals such as Electrocardiogram (ECG) and Electrodermal Activity (EDA) have been widely used for emotion recognition with machine learning methods, multimodal approaches generally fuse extracted features or final classification/regression results to boost performance. To enhance multimodal learning, we present a novel attentive cross-modal connection that shares information between the convolutional neural networks responsible for learning the individual modalities. Specifically, these connections improve emotion classification by sharing intermediate representations between the EDA and ECG branches and applying attention weights to the shared information, thus learning more effective multimodal embeddings. We perform experiments on the WESAD dataset to identify the best configuration of the proposed method for emotion classification. Our experiments show that the proposed approach is capable of learning strong multimodal representations and outperforms a number of baseline methods.
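To make the idea concrete, the following is a minimal sketch of what an attentive cross-modal connection between two signal branches could look like. The module names, the sigmoid-gated weighting, the channel sizes, and the input window length are illustrative assumptions; the paper's exact architecture and hyperparameters may differ.

```python
import torch
import torch.nn as nn


class AttentiveCrossModalConnection(nn.Module):
    """Hypothetical sketch: intermediate feature maps from one modality are
    weighted by learned attention scores and added to the other branch."""

    def __init__(self, channels):
        super().__init__()
        # Attention scores computed from the concatenated feature maps (assumed design).
        self.attn = nn.Sequential(
            nn.Conv1d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_ecg, feat_eda):
        # feat_ecg, feat_eda: (batch, channels, time)
        joint = torch.cat([feat_ecg, feat_eda], dim=1)
        weights = self.attn(joint)
        # Share attention-weighted information across the two branches.
        ecg_out = feat_ecg + weights * feat_eda
        eda_out = feat_eda + weights * feat_ecg
        return ecg_out, eda_out


class CrossModalCNN(nn.Module):
    """Two 1D-CNN branches (ECG and EDA) linked by an attentive connection,
    followed by a joint emotion classifier (illustrative layout)."""

    def __init__(self, num_classes=3, channels=32):
        super().__init__()
        self.ecg_conv = nn.Sequential(nn.Conv1d(1, channels, 7, padding=3), nn.ReLU())
        self.eda_conv = nn.Sequential(nn.Conv1d(1, channels, 7, padding=3), nn.ReLU())
        self.cross = AttentiveCrossModalConnection(channels)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(2 * channels, num_classes),
        )

    def forward(self, ecg, eda):
        f_ecg, f_eda = self.ecg_conv(ecg), self.eda_conv(eda)
        f_ecg, f_eda = self.cross(f_ecg, f_eda)
        return self.head(torch.cat([f_ecg, f_eda], dim=1))


# Example: a batch of 8 windows, 700 samples each (window length chosen arbitrarily).
model = CrossModalCNN()
logits = model(torch.randn(8, 1, 700), torch.randn(8, 1, 700))
```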