Deep learning has achieved great success in the past few years. However, its performance tends to degrade in non-IID situations. Domain generalization (DG) enables a model to generalize to an unseen test distribution, i.e., to learn domain-invariant representations. In this paper, we argue that domain-invariant features should originate from both internal and mutual aspects. Internal invariance means that the features can be learned within a single domain and capture the intrinsic semantics of the data, i.e., properties within a domain that are agnostic to other domains. Mutual invariance means that the features can be learned across multiple domains (cross-domain) and contain common information, i.e., features that are transferable w.r.t. other domains. We then propose DIFEX for Domain-Invariant Feature EXploration. DIFEX employs a knowledge distillation framework to capture the high-level Fourier phase as the internally-invariant features and learns cross-domain correlation alignment as the mutually-invariant features. We further design an exploration loss to increase feature diversity for better generalization. Extensive experiments on both time-series and visual benchmarks demonstrate that the proposed DIFEX achieves state-of-the-art performance.
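The abstract names three concrete ingredients: Fourier-phase features for internal invariance, cross-domain correlation alignment for mutual invariance, and an exploration loss for feature diversity. Below is a minimal PyTorch sketch of plausible forms of these three terms; the function names and the exact form of the exploration loss are illustrative assumptions, not the paper's released implementation.

```python
import torch


def fourier_phase(x: torch.Tensor) -> torch.Tensor:
    """Fourier phase of an image batch (B, C, H, W).

    The phase spectrum is commonly treated as carrying high-level
    semantics that stay stable within a domain, which is what the
    internally-invariant branch is distilled to capture.
    """
    freq = torch.fft.fft2(x, dim=(-2, -1))
    return torch.angle(freq)


def coral_loss(src: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
    """Correlation alignment between two domains' feature matrices
    (N, d): match their second-order statistics (covariances)."""
    d = src.size(1)
    src = src - src.mean(dim=0, keepdim=True)
    tgt = tgt - tgt.mean(dim=0, keepdim=True)
    cov_s = src.t() @ src / (src.size(0) - 1)
    cov_t = tgt.t() @ tgt / (tgt.size(0) - 1)
    return ((cov_s - cov_t) ** 2).sum() / (4 * d * d)


def exploration_loss(f_int: torch.Tensor, f_mut: torch.Tensor) -> torch.Tensor:
    """Hypothetical diversity term (an assumption, not the paper's
    exact formulation): push the internally-invariant and
    mutually-invariant halves of the embedding apart so they do not
    encode redundant information. Negated distance turns the
    diversity objective into a loss to minimize."""
    return -torch.norm(f_int - f_mut, p=2, dim=1).mean()
```

In this reading, the total training objective would combine a standard classification loss with a distillation term toward the Fourier phase, the correlation-alignment term, and the exploration term, each weighted by a trade-off hyperparameter.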