Domain adaptation has been widely explored by transferring knowledge from a label-rich source domain to a related but unlabeled target domain. Most existing domain adaptation algorithms focus on adapting feature representations across the two domains under the guidance of a shared source-supervised classifier. However, such a classifier limits the generalization ability on unlabeled target recognition. To remedy this, we propose a Transferable Semantic Augmentation (TSA) approach that enhances the classifier's adaptation ability by implicitly generating source features endowed with target semantics. Specifically, TSA is inspired by the fact that transforming deep features along certain directions corresponds to meaningful semantic alterations in the original input space. Thus, source features can be augmented to effectively carry target semantics and train a more transferable classifier. To achieve this, for each class, we first use the inter-domain feature mean difference and the target intra-class feature covariance to construct a multivariate normal distribution. Then we augment the source features with random directions sampled from this distribution in a class-wise manner. Interestingly, such source augmentation is implemented implicitly through an expected transferable cross-entropy loss over the augmented source distribution, where an upper bound of the expected loss is derived and minimized, introducing negligible computational overhead. As a lightweight and general technique, TSA can be easily plugged into various domain adaptation methods, bringing remarkable improvements. Comprehensive experiments on cross-domain benchmarks validate the efficacy of TSA.
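To make the implicit augmentation concrete, below is a minimal PyTorch-style sketch of the upper-bound surrogate loss described above, assuming per-class statistics have already been estimated: `delta_mu` holds the inter-domain class-mean differences, `target_cov` the target intra-class covariances, and `lam` the augmentation strength. The function name, argument layout, and the exact closed form (obtained from the Gaussian moment-generating function) are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def tsa_upper_bound_loss(features, labels, fc_weight, fc_bias,
                         delta_mu, target_cov, lam):
    """Sketch of an expected transferable cross-entropy upper bound.

    features:   (N, D) source features f_i
    labels:     (N,)   source labels y_i
    fc_weight:  (K, D) classifier weights W
    fc_bias:    (K,)   classifier biases b
    delta_mu:   (K, D) per-class inter-domain mean difference mu_t^c - mu_s^c
    target_cov: (K, D, D) per-class target intra-class covariance Sigma_c
    lam:        augmentation strength lambda (typically annealed during training)
    """
    # Plain source logits: z_j = w_j^T f_i + b_j
    logits = features @ fc_weight.t() + fc_bias            # (N, K)

    # Pairwise weight differences w_j - w_{y_i} for every sample i and class j
    w_y = fc_weight[labels]                                  # (N, D)
    w_diff = fc_weight.unsqueeze(0) - w_y.unsqueeze(1)       # (N, K, D)

    # Linear shift from the Gaussian mean: lam * (w_j - w_{y_i})^T delta_mu_{y_i}
    mean_term = lam * torch.einsum('nkd,nd->nk', w_diff, delta_mu[labels])

    # Quadratic shift from the covariance:
    # (lam / 2) * (w_j - w_{y_i})^T Sigma_{y_i} (w_j - w_{y_i})
    cov_y = target_cov[labels]                                # (N, D, D)
    quad_term = 0.5 * lam * torch.einsum('nkd,nde,nke->nk', w_diff, cov_y, w_diff)

    # Both shift terms vanish for j = y_i, so the surrogate reduces to a
    # cross-entropy over "augmented" logits without sampling any directions.
    aug_logits = logits + mean_term + quad_term
    return F.cross_entropy(aug_logits, labels)
```

Because the expectation is bounded in closed form, the augmentation never materializes sampled features, which is why the overhead stays negligible compared with standard cross-entropy training.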