Although deep learning has achieved impressive success in computer vision, it still suffers from the domain shift challenge when the target domain used for testing and the source domain used for training do not share an identical distribution. To address this, domain generalization approaches aim to extract domain-invariant features that lead to a more robust model; increasing the diversity of the source domains is therefore a key component of domain generalization. Style augmentation exploits instance-specific feature statistics, which carry informative style characteristics, to synthesize novel domains. However, previous works either ignored the correlation between different feature channels or restricted style augmentation to linear interpolation. In this work, we propose a novel augmentation method, called \textit{Correlated Style Uncertainty (CSU)}, which goes beyond linear interpolation in the style statistics space while preserving the essential correlation information. We validate our method's effectiveness through extensive experiments on multiple cross-domain classification tasks, including the widely used PACS, Office-Home, and Camelyon17 datasets as well as the Duke-Market1501 instance retrieval task, and obtain significant improvements over state-of-the-art methods. The source code is available for public use.
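To make the notion of style augmentation via instance-specific feature statistics concrete, the following is a minimal PyTorch sketch of the linear-interpolation baseline (in the spirit of MixStyle) that the abstract says CSU goes beyond: the per-instance, per-channel mean and standard deviation of a feature map are treated as the style and mixed across the batch. The function name and hyperparameters are illustrative assumptions, and the sketch does not reproduce CSU's correlated-uncertainty sampling.

```python
import torch

def mix_feature_statistics(x, alpha=0.1, eps=1e-6):
    """Hypothetical sketch of linear-interpolation style augmentation.

    x: feature map of shape (B, C, H, W).
    Treats the per-instance, per-channel mean/std as "style" and linearly
    interpolates it with another instance's style from the same batch.
    CSU's correlated, beyond-linear sampling is NOT implemented here.
    """
    B = x.size(0)
    mu = x.mean(dim=(2, 3), keepdim=True)                 # instance/channel mean
    sig = (x.var(dim=(2, 3), keepdim=True) + eps).sqrt()  # instance/channel std
    x_norm = (x - mu) / sig                               # strip instance style

    perm = torch.randperm(B, device=x.device)             # donor instances
    lam = torch.distributions.Beta(alpha, alpha).sample((B, 1, 1, 1)).to(x.device)
    mu_mix = lam * mu + (1 - lam) * mu[perm]              # linear interpolation
    sig_mix = lam * sig + (1 - lam) * sig[perm]           # of style statistics
    return x_norm * sig_mix + mu_mix                      # re-style the features
```

Because each channel's statistics are mixed independently with a shared interpolation weight, this baseline discards the cross-channel correlation that the proposed CSU method is designed to preserve.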