In image classification, acquiring sufficient labels is often expensive and time-consuming. To solve this problem, domain adaptation provides an attractive option when a large amount of labeled data is available from a different but related domain. Existing approaches mainly align the distributions of representations extracted by a single structure, and such representations may contain only partial information, e.g., only part of the saturation, brightness, and hue information. Along this line, we propose Multi-Representation Adaptation, which can dramatically improve classification accuracy for cross-domain image classification and specifically aims to align the distributions of multiple representations extracted by a hybrid structure named Inception Adaptation Module (IAM). Based on this, we present the Multi-Representation Adaptation Network (MRAN) to accomplish the cross-domain image classification task via multi-representation alignment, which can capture information from different aspects. In addition, we extend Maximum Mean Discrepancy (MMD) to compute the adaptation loss. Our approach can be easily implemented by extending most feed-forward models with IAM, and the network can be trained efficiently via back-propagation. Experiments conducted on three benchmark image datasets demonstrate the effectiveness of MRAN. The code is available at https://github.com/easezyc/deep-transfer-learning.
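The adaptation loss mentioned above builds on Maximum Mean Discrepancy. As background, a minimal sketch of the standard (unextended) biased squared-MMD estimator with a Gaussian kernel is shown below; the function names are hypothetical and this is not the paper's extended variant:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """RBF kernel k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def mmd2(source, target, sigma=1.0):
    """Biased estimate of squared MMD between two sample sets.

    MMD^2 = E[k(s, s')] + E[k(t, t')] - 2 E[k(s, t)],
    where s, s' are drawn from the source and t, t' from the target.
    """
    ss = np.mean([gaussian_kernel(a, b, sigma) for a in source for b in source])
    tt = np.mean([gaussian_kernel(a, b, sigma) for a in target for b in target])
    st = np.mean([gaussian_kernel(a, b, sigma) for a in source for b in target])
    return ss + tt - 2.0 * st
```

Under this estimator, two samples drawn from the same distribution yield a value near zero, while a distribution shift between source and target features produces a larger value, which is what makes it usable as a training loss for aligning representations.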