Emotion recognition plays a crucial role in human interactions, and automatic emotion recognition is therefore an extensively researched topic, notably through computer-vision approaches. Processing multi-channel electroencephalogram (EEG) signals is one of the most studied methods for automatic emotion recognition. This paper presents a new model for affective-computing-driven Quality of Experience (QoE) prediction. The proposed model is validated on a publicly available dataset containing EEG, electrocardiogram (ECG), and respiratory data collected in a multimedia QoE assessment context. Only the EEG data are retained, from which the differential entropy and the power spectral density are computed over a three-second observation window. These two features are used to train several deep-learning models and to investigate the feasibility of predicting QoE along five different factors. The models' performance is compared, and the best model is optimized to improve its results. The best results are obtained with an LSTM-based model, with F1-scores ranging from 68% to 78%. An analysis of the model and of its features shows that the Delta frequency band contributes the least, that two electrodes are markedly more important, and that two other electrodes have very little impact on the model's performance.
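As a minimal sketch of the feature-extraction step described above (not the authors' code), the snippet below computes differential entropy (DE) and power spectral density (PSD) per EEG channel and frequency band over one three-second window. The sampling rate, the band limits, and the Gaussian closed form used for DE are assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, welch

FS = 256  # assumed sampling rate in Hz (not stated in the abstract)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}  # assumed band limits

def band_features(window: np.ndarray) -> dict:
    """window: (n_channels, 3 * FS) array holding one 3-second EEG segment."""
    feats = {}
    # Welch PSD over the whole window; band power is averaged per band below.
    freqs, psd = welch(window, fs=FS, nperseg=FS)
    for name, (lo, hi) in BANDS.items():
        # Band-pass filter the segment to isolate the frequency band.
        sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
        filtered = sosfiltfilt(sos, window, axis=-1)
        # Under a Gaussian assumption, DE reduces to 0.5 * ln(2*pi*e*variance).
        de = 0.5 * np.log(2 * np.pi * np.e * np.var(filtered, axis=-1))
        mask = (freqs >= lo) & (freqs < hi)
        feats[name] = {"de": de, "psd": psd[:, mask].mean(axis=-1)}
    return feats
```

Sliding such a window over each recording would yield per-band DE/PSD sequences of the kind a recurrent model such as the LSTM mentioned above can consume.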