While neural networks can achieve human-like performance on many tasks such as image classification, each model's impressive performance is limited to its own dataset. Source-free domain adaptation (SFDA) was introduced to address knowledge transfer between domains in the absence of source data, thereby preserving data privacy. Diversity in the representation space can be vital to a model's adaptability across varied and difficult domains. In unsupervised SFDA, diversity is limited to learning a single hypothesis on the source or learning multiple hypotheses with a shared feature extractor. Motivated by the improved predictive performance of ensembles, we propose a novel unsupervised SFDA algorithm that promotes representational diversity through the use of separate feature extractors with Distinct Backbone Architectures (DBA). Although diversity in the feature space is increased, unconstrained mutual information (MI) maximization may amplify weak hypotheses. We therefore introduce the Weak Hypothesis Penalization (WHP) regularizer as a mitigation strategy. Our work proposes Penalized Diversity (PD), in which the synergy of DBA and WHP is applied to unsupervised source-free domain adaptation under covariate shift. In addition, PD is augmented with a weighted MI maximization objective for label distribution shift. Empirical results on natural, synthetic, and medical domains demonstrate the effectiveness of PD under different distributional shifts.
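To make the MI maximization objective concrete, the sketch below shows the standard information-maximization loss widely used in unsupervised SFDA: conditional entropy (confident per-sample predictions) minus marginal entropy (diverse predictions across classes). The `class_weights` argument is a hypothetical illustration of how a weighted variant could reweight the marginal term under label distribution shift; it is not the paper's exact formulation.

```python
import numpy as np

def info_max_loss(probs, class_weights=None, eps=1e-8):
    """Information-maximization objective common in unsupervised SFDA.

    probs: (N, C) array of per-sample class probabilities.
    class_weights: optional (C,) reweighting of the marginal, a
        hypothetical knob for handling label-distribution shift.
    Minimizing the returned value makes predictions confident
    (low conditional entropy) yet diverse (high marginal entropy).
    """
    # Conditional entropy: averaged per-sample prediction entropy.
    cond_ent = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    # Marginal class distribution across the batch.
    marginal = probs.mean(axis=0)
    if class_weights is not None:
        marginal = marginal * class_weights
        marginal = marginal / marginal.sum()
    # Marginal entropy: rewards using all classes, not just a few.
    marg_ent = -np.sum(marginal * np.log(marginal + eps))
    return cond_ent - marg_ent
```

For confident, class-balanced predictions the loss approaches its minimum of roughly -log C, while uniform (maximally uncertain) predictions yield a loss near zero.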