For a target task where labeled data is unavailable, domain adaptation can transfer a learner from a different source domain. Previous deep domain adaptation methods mainly learn a global domain shift, i.e., they align the global source and target distributions without considering the relationships between subdomains of the same category across domains; failing to capture this fine-grained information leads to unsatisfactory transfer performance. Recently, increasing attention has been paid to subdomain adaptation, which focuses on accurately aligning the distributions of the relevant subdomains. However, most existing approaches are adversarial methods that involve several loss functions and converge slowly. Motivated by this, we present Deep Subdomain Adaptation Network (DSAN), which learns a transfer network by aligning the relevant subdomain distributions of domain-specific layer activations across different domains based on a local maximum mean discrepancy (LMMD). DSAN is simple yet effective: it requires no adversarial training and converges quickly. Adaptation can be achieved easily with most feed-forward network models by extending them with the LMMD loss, which can be trained efficiently via back-propagation. Experiments demonstrate that DSAN achieves remarkable results on both object recognition and digit classification tasks. Our code will be available at: https://github.com/easezyc/deep-transfer-learning
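To make the LMMD idea concrete, below is a minimal numpy sketch (not the authors' reference implementation) of a class-weighted MMD between source and target features: source weights come from ground-truth labels, target weights from the model's soft predictions, and an RBF kernel estimates the per-class discrepancy. The function names, the single-bandwidth Gaussian kernel, and the uniform class averaging are simplifying assumptions for illustration.

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    """Pairwise RBF kernel matrix between rows of X and rows of Y."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def lmmd(source_feat, target_feat, source_labels, target_probs,
         num_classes, gamma=1.0):
    """Local MMD: class-weighted MMD averaged over classes.

    source_labels: integer ground-truth labels for source samples.
    target_probs:  (n_t, C) softmax predictions for unlabeled target samples,
                   used as soft class-membership weights.
    """
    # One-hot weights for source, soft predicted weights for target.
    s_onehot = np.eye(num_classes)[source_labels]                     # (n_s, C)
    # Normalize so each class's weights sum to 1 (empirical class mean).
    ws = s_onehot / np.clip(s_onehot.sum(0, keepdims=True), 1e-12, None)
    wt = target_probs / np.clip(target_probs.sum(0, keepdims=True), 1e-12, None)

    Kss = gaussian_kernel(source_feat, source_feat, gamma)
    Ktt = gaussian_kernel(target_feat, target_feat, gamma)
    Kst = gaussian_kernel(source_feat, target_feat, gamma)

    loss = 0.0
    for c in range(num_classes):
        a, b = ws[:, c:c + 1], wt[:, c:c + 1]
        # Squared RKHS distance between weighted class means: always >= 0.
        loss += float(a.T @ Kss @ a + b.T @ Ktt @ b - 2.0 * a.T @ Kst @ b)
    return loss / num_classes
```

In a full training loop this scalar would simply be added (with a trade-off weight) to the classification loss on a chosen feature layer, which is why the loss plugs into most feed-forward networks and trains via ordinary back-propagation.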