This work describes different strategies for generating unsupervised representations through self-taught learning for facial emotion recognition (FER). The idea is to create complementary representations that promote diversity by varying the autoencoders' initialization, architecture, and training data. SVM, Bagging, Random Forest, and a dynamic ensemble selection method are evaluated as final classifiers. Experimental results on the JAFFE and Cohn-Kanade datasets under a leave-one-subject-out protocol show that FER methods based on the proposed diverse representations compare favorably against state-of-the-art approaches that also exploit unsupervised feature learning.
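As a rough illustration of the diversity idea described above, the sketch below (not the authors' code) trains several small autoencoders that differ in random seed, hidden-layer width, and the bootstrap sample of unlabeled data they see, then concatenates their encodings and feeds them to an SVM. It assumes PyTorch and scikit-learn; the random arrays, hidden sizes, and seeds are placeholder choices standing in for face images and the paper's actual settings.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

def train_autoencoder(X, hidden_dim, seed, epochs=50, lr=1e-3):
    """Train a one-hidden-layer autoencoder; the seed varies the initialization."""
    torch.manual_seed(seed)
    d = X.shape[1]
    enc = nn.Sequential(nn.Linear(d, hidden_dim), nn.Sigmoid())
    dec = nn.Linear(hidden_dim, d)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
    Xt = torch.tensor(X, dtype=torch.float32)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(dec(enc(Xt)), Xt)  # reconstruction loss
        loss.backward()
        opt.step()
    return enc

rng = np.random.default_rng(0)
# Placeholder data: stand-ins for unlabeled images (self-taught learning) and
# for the labeled FER training/test sets with the 7 basic emotion classes.
X_unlabeled = rng.random((500, 256)).astype(np.float32)
X_train = rng.random((100, 256)).astype(np.float32)
y_train = rng.integers(0, 7, 100)
X_test = rng.random((20, 256)).astype(np.float32)

# Diversity: each autoencoder gets a different seed (initialization), a different
# hidden size (architecture), and a different bootstrap sample (training data).
encoders = []
for seed, hidden in zip([0, 1, 2], [64, 96, 128]):
    idx = rng.choice(len(X_unlabeled), size=len(X_unlabeled), replace=True)
    encoders.append(train_autoencoder(X_unlabeled[idx], hidden, seed))

def encode(X):
    """Concatenate the complementary representations produced by all encoders."""
    with torch.no_grad():
        Xt = torch.tensor(X, dtype=torch.float32)
        return np.hstack([e(Xt).numpy() for e in encoders])

clf = SVC(kernel="linear").fit(encode(X_train), y_train)
pred = clf.predict(encode(X_test))
```

Swapping the final SVC for a Bagging or Random Forest classifier, or for a dynamic ensemble selection scheme, follows the same pattern: only the classifier fitted on the concatenated representation changes.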