In recent years, researchers have paid increasing attention to the threats that deep learning models pose to data security and privacy, especially in the field of domain adaptation. Existing unsupervised domain adaptation (UDA) methods can achieve promising performance without transferring data from the source domain to the target domain. However, UDA with representation alignment or self-supervised pseudo-labeling relies on transferred source models. In many data-critical scenarios, methods based on model transfer may suffer from membership inference attacks and expose private data. In this paper, we aim to overcome a challenging new setting in which the source model cannot be transferred to the target domain. We propose Domain Adaptation without Source Model, which refines information from the source model. To obtain more informative results, we further propose Distributionally Adversarial Training (DAT) to align the distribution of the source data with that of the target data. Experimental results on the Digit-Five, Office-Caltech, Office-31, Office-Home, and DomainNet benchmarks demonstrate the feasibility of our method without model transfer.