Deep Neural Networks (DNNs) suffer from domain shift when the test dataset follows a distribution different from that of the training dataset. Domain generalization aims to tackle this issue by learning a model that can generalize to unseen domains. In this paper, we propose a new approach that explicitly removes domain-specific features for domain generalization. Following this approach, we propose a novel framework called Learning and Removing Domain-specific features for Generalization (LRDG) that learns a domain-invariant model by tactically removing domain-specific features from the input images. Specifically, we design one classifier per source domain to effectively learn that domain's specific features. We then develop an encoder-decoder network that maps each input image into a new image space where the learned domain-specific features are removed. With the images output by the encoder-decoder network, another classifier is designed to learn the domain-invariant features for image classification. Extensive experiments demonstrate that our framework achieves superior performance compared with state-of-the-art methods.
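The three components described above — per-domain classifiers, an encoder-decoder that maps images into a new space, and a domain-invariant classifier on the mapped images — can be sketched structurally as follows. This is a minimal NumPy sketch of the data flow only, not the paper's actual networks or training objective; all dimensions, layer shapes, and names (`forward`, `IMG_DIM`, etc.) are hypothetical assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, not taken from the paper.
IMG_DIM, HIDDEN, N_CLASSES, N_DOMAINS = 64, 32, 7, 3

def linear(in_dim, out_dim):
    # A single weight matrix stands in for a deep network here.
    return rng.normal(scale=0.1, size=(in_dim, out_dim))

# One domain-specific classifier per source domain: each is trained
# (not shown) to capture the features specific to its own domain.
domain_classifiers = [linear(IMG_DIM, N_CLASSES) for _ in range(N_DOMAINS)]

# Encoder-decoder that maps an input image into a new image space
# (same shape as the input) where domain-specific features are removed.
enc, dec = linear(IMG_DIM, HIDDEN), linear(HIDDEN, IMG_DIM)

# Domain-invariant classifier operating on the mapped images.
invariant_classifier = linear(IMG_DIM, N_CLASSES)

def forward(x):
    mapped = np.tanh(x @ enc) @ dec          # encoder-decoder output image
    logits = mapped @ invariant_classifier   # domain-invariant prediction
    return mapped, logits

x = rng.normal(size=(5, IMG_DIM))            # a batch of 5 flattened images
mapped, logits = forward(x)
assert mapped.shape == x.shape
assert logits.shape == (5, N_CLASSES)
```

In the actual framework, the domain-specific classifiers would guide the encoder-decoder during training so that the mapped images no longer carry features those classifiers can exploit; the sketch above only shows how the pieces connect at inference time.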