Domain Adaptation (DA) attempts to transfer knowledge learned in a labeled source domain to an unlabeled but related target domain without requiring large amounts of target supervision. Recent advances in DA mainly proceed by aligning the source and target distributions. Despite significant success, adaptation performance still degrades when the source and target domains exhibit a large distribution discrepancy. We consider that this limitation may be attributed to insufficient exploration of domain-specialized features, because most studies concentrate only on domain-general feature learning in task-specific layers and integrate fully-shared convolutional networks (convnets) to generate common features for both domains. In this paper, we relax the completely-shared convnets assumption adopted by previous DA methods and propose the Domain Conditioned Adaptation Network (DCAN), which introduces a domain conditioned channel attention module with a multi-path structure to separately excite channel activations for each domain. Such a partially-shared convnets module allows domain-specialized features at low levels to be explored appropriately. Further, given that knowledge transferability varies across convolutional layers, we develop the Generalized Domain Conditioned Adaptation Network (GDCAN) to automatically determine whether domain channel activations should be separately modeled in each attention module. The critical domain-specialized knowledge can thus be adaptively extracted according to the statistical gaps between domains. To the best of our knowledge, this is the first work to explore domain-wise convolutional channel activations separately for deep DA networks. Additionally, to effectively match high-level feature distributions across domains, we deploy feature adaptation blocks after the task-specific layers, which explicitly mitigate the domain discrepancy.
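To make the multi-path idea concrete, the following is a minimal sketch, not the authors' implementation, of a domain conditioned channel attention module. It assumes a squeeze-and-excitation style block in PyTorch whose excitation path is duplicated per domain; the class name `DomainConditionedAttention`, the reduction ratio, and the two-path indexing are illustrative assumptions.

```python
# Minimal sketch (assumed design, not the official DCAN code): an SE-style
# channel attention block with one excitation path per domain, so source and
# target samples can excite convolutional channels differently.
import torch
import torch.nn as nn


class DomainConditionedAttention(nn.Module):
    """Channel attention with separate excitation paths for source/target."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global average pooling
        # Two parallel excitation paths: index 0 for source, 1 for target.
        self.excite = nn.ModuleList([
            nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )
            for _ in range(2)
        ])

    def forward(self, x: torch.Tensor, domain: int) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)       # per-channel statistics
        w = self.excite[domain](w)        # domain-specific channel weights
        return x * w.view(b, c, 1, 1)     # re-weight feature channels


if __name__ == "__main__":
    feat = torch.randn(4, 64, 28, 28)
    attn = DomainConditionedAttention(channels=64)
    src_out = attn(feat, domain=0)  # source-domain path
    tgt_out = attn(feat, domain=1)  # target-domain path
    print(src_out.shape, tgt_out.shape)
```

In this sketch the convolutional backbone remains shared; only the lightweight excitation paths are domain-specific, which reflects the partially-shared convnets idea described above.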