Theoretically, domain adaptation is a well-studied problem, and this theory has seen substantial use in practice. In particular, we note the bound on target error given by Ben-David et al. (2010) and the well-known domain-aligning algorithm based on this work, Domain Adversarial Neural Networks (DANN), presented by Ganin and Lempitsky (2015). Recently, multiple variants of DANN have been proposed for the related problem of domain generalization, but without much discussion of the original motivating bound. In this paper, we investigate the validity of DANN for domain generalization from this perspective. We examine the conditions under which applying DANN makes sense, and further consider DANN as a dynamic process during training. Our investigation suggests that applying DANN to domain generalization may not be as straightforward as it seems. To address this, we design an algorithmic extension to DANN for the domain generalization setting. Our experiments validate both the theory and the algorithm.