Domain Adaptation (DA) has recently received significant attention due to its potential to adapt a learning model across source and target domains with mismatched distributions. Since DA methods rely exclusively on the given source and target domain samples, they generally yield models that are vulnerable to noise and unable to generalize to unseen target domain samples, which calls for DA methods that guarantee the robustness and generalization of the learned models. In this paper, we propose DRDA, a distributionally robust domain adaptation method. DRDA leverages a distributionally robust optimization (DRO) framework to learn a robust decision function that minimizes the worst-case target domain risk and generalizes to any sample from the target domain by transferring knowledge from a given labeled source domain sample. We utilize the Maximum Mean Discrepancy (MMD) metric to construct an ambiguity set of distributions that provably contains the source and target domain distributions with high probability. Consequently, the worst-case risk is shown to upper bound the out-of-sample target domain loss. Our experimental results demonstrate that our formulation outperforms existing robust learning approaches.
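The MMD metric mentioned above admits a simple empirical estimate from finite samples of the two domains. The following is a minimal sketch (not the paper's implementation) of the biased squared-MMD estimator with an RBF kernel; the function names and the `gamma` bandwidth parameter are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel: k(a, b) = exp(-gamma * ||a - b||^2)
    sq_dists = (np.sum(A**2, axis=1)[:, None]
                + np.sum(B**2, axis=1)[None, :]
                - 2.0 * A @ B.T)
    return np.exp(-gamma * sq_dists)

def mmd2(X, Y, gamma=1.0):
    # Biased empirical estimate of squared MMD between samples X and Y:
    # MMD^2 = E[k(x, x')] - 2 E[k(x, y)] + E[k(y, y')]
    return (rbf_kernel(X, X, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean())
```

Identical samples give an estimate of zero, while samples from well-separated distributions give a strictly positive value; an MMD-based ambiguity set would then collect all distributions within a chosen MMD radius of the empirical distribution.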