Convolutional neural networks (CNNs) are widely used to recognize a user's state from electroencephalography (EEG) signals. In previous studies, EEG signals are usually fed into the CNN as high-dimensional raw data. However, this approach makes it difficult to exploit brain connectivity information, which can be effective in describing the functional brain network and estimating the user's perceptual state. We introduce a new classification system that utilizes brain connectivity with a CNN and validate its effectiveness on emotional video classification using three different types of connectivity measures. Furthermore, we propose two data-driven methods for constructing the connectivity matrix to maximize classification performance. Further analysis reveals that the degree of concentration of the brain connectivity related to the emotional property of the target video correlates with classification performance.
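The abstract does not name the three connectivity measures used, but a minimal sketch can illustrate the general idea of turning multichannel EEG into a channel-by-channel connectivity matrix that a CNN could consume as an image-like input. The example below uses Pearson correlation, one common functional-connectivity measure; the shapes (32 channels, 1000 samples) and the random data are purely illustrative assumptions, not the paper's setup.

```python
import numpy as np

def connectivity_matrix(eeg: np.ndarray) -> np.ndarray:
    """Compute a Pearson-correlation connectivity matrix.

    eeg: array of shape (n_channels, n_samples), one row per electrode.
    Returns a symmetric (n_channels, n_channels) matrix whose (i, j)
    entry measures how strongly channels i and j co-vary over time.
    """
    # np.corrcoef treats each row as a variable, which matches the
    # (channels, samples) layout used here.
    return np.corrcoef(eeg)

# Illustrative synthetic recording: 32 channels, 1000 time samples.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, 1000))

C = connectivity_matrix(eeg)
print(C.shape)  # (32, 32) -- a square matrix, image-like input for a CNN
```

Because the result is a fixed-size 2-D matrix regardless of recording length, it can be fed to a standard 2-D CNN in place of the raw high-dimensional signal, which is the structural point the abstract makes.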