Thanks to large-scale labeled training data, deep neural networks (DNNs) have achieved remarkable success in many vision and multimedia tasks. However, because of domain shift, the knowledge learned by well-trained DNNs cannot be readily generalized to new domains or datasets with few labels. Unsupervised domain adaptation (UDA) studies the problem of transferring models trained on a labeled source domain to an unlabeled target domain. In this paper, we focus on UDA for visual emotion analysis, covering both emotion distribution learning and dominant emotion classification. Specifically, we design a novel end-to-end cycle-consistent adversarial model, termed CycleEmotionGAN++. First, we generate an adapted domain to align the source and target domains at the pixel level by improving CycleGAN with a multi-scale structured cycle-consistency loss. During image translation, we propose a dynamic emotional semantic consistency loss to preserve the emotion labels of the source images. Second, we train a transferable task classifier on the adapted domain with feature-level alignment between the adapted and target domains. We conduct extensive UDA experiments on the Flickr-LDL & Twitter-LDL datasets for emotion distribution learning and on the ArtPhoto & FI datasets for dominant emotion classification. The results demonstrate significant improvements of the proposed CycleEmotionGAN++ over state-of-the-art UDA approaches.
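To make the two consistency terms concrete, the sketch below gives a minimal, illustrative PyTorch rendering of (i) a cycle-consistency loss evaluated at multiple spatial scales and (ii) an emotional semantic consistency loss that compares emotion predictions on a source image and its adapted version. This is not the authors' code: the pooling-based multi-scale scheme, the KL-divergence choice, and the function names (`multiscale_cycle_consistency_loss`, `emotional_semantic_consistency_loss`) are assumptions for illustration, and the dynamic weighting described in the paper is omitted.

```python
# Illustrative sketch (assumed details, not the paper's implementation).
import torch
import torch.nn.functional as F_nn


def multiscale_cycle_consistency_loss(x, x_reconstructed, scales=(1, 2, 4)):
    """L1 cycle-consistency between an image and its reconstruction,
    averaged over several spatial scales (downsampled by average pooling).

    The specific scales and pooling operator are assumptions; the paper's
    multi-scale structured loss may differ in detail.
    """
    loss = 0.0
    for s in scales:
        xs = F_nn.avg_pool2d(x, kernel_size=s) if s > 1 else x
        rs = F_nn.avg_pool2d(x_reconstructed, kernel_size=s) if s > 1 else x_reconstructed
        loss = loss + F_nn.l1_loss(rs, xs)
    return loss / len(scales)


def emotional_semantic_consistency_loss(logits_source, logits_adapted):
    """KL divergence between emotion predictions on a source image and on its
    adapted (translated) counterpart, encouraging the translation to preserve
    the source emotion labels.
    """
    log_p_adapted = torch.log_softmax(logits_adapted, dim=1)
    p_source = torch.softmax(logits_source, dim=1)
    return F_nn.kl_div(log_p_adapted, p_source, reduction="batchmean")
```

In a training loop, the first term would be applied to source images and their source-to-target-to-source reconstructions (and symmetrically for target images), while the second would be computed from a task classifier's outputs on a source image and its pixel-level adapted version.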