In general, an experimental environment for deep learning assumes that the training and test datasets are sampled from the same distribution. However, in real-world situations, a difference in distribution between the two datasets, i.e., domain shift, may occur, which becomes a major factor impeding the generalization performance of the model. The research field addressing this problem is called domain generalization, and it alleviates the domain shift problem by extracting domain-invariant features explicitly or implicitly. In recent studies, contrastive learning-based domain generalization approaches have been proposed and have achieved high performance. These approaches require sampling of negative data pairs. However, the performance of contrastive learning fundamentally depends on the quality and quantity of negative data pairs. To address this issue, we propose a new regularization method for domain generalization based on contrastive learning, self-supervised contrastive regularization (SelfReg). The proposed approach uses only positive data pairs, thus resolving the various problems caused by negative pair sampling. Moreover, we propose a class-specific domain perturbation layer (CDPL), which makes it possible to effectively apply mixup augmentation even when only positive data pairs are used. The experimental results show that the techniques incorporated in SelfReg contribute to performance in a complementary manner. On the recent benchmark DomainBed, the proposed method shows performance comparable to conventional state-of-the-art alternatives. Code is available at https://github.com/dnap512/SelfReg.
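To make the positive-pair-only idea concrete, below is a minimal PyTorch sketch of a regularizer in this spirit: each sample is paired with another in-batch sample of the same class (a positive pair), the feature distance between the pair is penalized, and a mixup of the paired features is aligned back to the anchor, loosely mirroring the role the abstract assigns to CDPL. The function name, the `cdpl` argument, and all loss details here are illustrative assumptions, not the paper's exact formulation; consult the linked repository for the authors' implementation.

```python
import torch
import torch.nn.functional as F


def positive_pair_regularizer(features, labels, cdpl=None, mixup_alpha=0.2):
    """Illustrative positive-pair-only contrastive regularizer (sketch).

    features: (N, D) feature vectors from the backbone.
    labels:   (N,) integer class labels.
    cdpl:     optional perturbation module applied to features
              (hypothetical stand-in for the paper's CDPL).
    """
    # Shuffle indices within each class so every sample gets a
    # same-class partner; no negative pairs are ever sampled.
    perm = torch.arange(len(labels))
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        perm[idx] = idx[torch.randperm(len(idx))]
    pos = features[perm]  # same-class partner features

    if cdpl is not None:  # optional class-specific perturbation
        features, pos = cdpl(features), cdpl(pos)

    # In-batch dissimilarity between positive pairs.
    loss_pair = F.mse_loss(features, pos)

    # Mixup of the positive features, aligned back to the anchor,
    # which is only well-defined because both inputs share a class.
    lam = torch.distributions.Beta(mixup_alpha, mixup_alpha).sample()
    mixed = lam * features + (1 - lam) * pos
    loss_mix = F.mse_loss(features, mixed)

    return loss_pair + loss_mix
```

In practice such a term would be added, with a weighting coefficient, to the standard classification loss during training; because only same-class pairs are compared, the regularizer avoids the quality and quantity issues of negative-pair sampling noted above.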