Domain adaptation (DA) has become a promising technique for addressing the problem of insufficient or missing annotations by exploiting external source knowledge. Existing DA algorithms mainly focus on effective knowledge transfer through domain alignment. Unfortunately, they ignore the fairness issue that arises when the auxiliary source is extremely imbalanced across categories, which leads to severely under-represented knowledge adaptation for the minority source classes. To this end, we propose the Towards Fair Knowledge Transfer (TFKT) framework to handle the fairness challenge in imbalanced cross-domain learning. Specifically, a novel cross-domain mixup generation strategy augments the minority source set with target information to enhance fairness. Moreover, dual distinct classifiers and cross-domain prototype alignment are developed to seek a more robust classification boundary and mitigate the domain shift. These three strategies are formulated into a unified framework that addresses both the fairness issue and the domain shift challenge. Extensive experiments on two popular benchmarks verify the effectiveness of our proposed model against existing state-of-the-art DA models; in particular, our model improves overall accuracy by more than 20% on both benchmarks.
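As a rough illustration of the cross-domain mixup generation idea, the following NumPy sketch interpolates minority-class source samples with unlabeled target samples. This is a hypothetical sketch, not the paper's exact formulation: the function name `cross_domain_mixup`, the Beta-distributed mixing coefficient, and the rule of keeping the source label are all assumptions.

```python
import numpy as np

def cross_domain_mixup(x_src_minority, y_src_minority, x_tgt, alpha=0.2, rng=None):
    """Augment minority-class source samples by mixing them with target
    samples (illustrative sketch only; TFKT's actual formulation may differ).

    Each augmented sample is a convex combination
        x_mix = lam * x_s + (1 - lam) * x_t,   lam ~ Beta(alpha, alpha),
    with lam folded toward the source side so the source label is kept.
    """
    rng = np.random.default_rng(rng)
    n = len(x_src_minority)
    # Sample per-example mixing coefficients and bias them toward the
    # source side (lam >= 0.5) so the source label stays dominant.
    lam = rng.beta(alpha, alpha, size=(n, 1))
    lam = np.maximum(lam, 1.0 - lam)
    # Pair each minority source sample with a randomly drawn target sample.
    idx = rng.integers(0, len(x_tgt), size=n)
    x_mix = lam * x_src_minority + (1.0 - lam) * x_tgt[idx]
    return x_mix, y_src_minority.copy()
```

Under this sketch, the augmented set enlarges the minority classes while injecting target-domain statistics, which is the stated goal of the mixup component.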