Domain adaptation techniques have contributed to the success of deep learning. Leveraging knowledge from an auxiliary source domain to learn in a target domain where labeled data are scarce is fundamental to domain adaptation. While these techniques improve accuracy, the adaptation process, and in particular the knowledge leveraged from the source domain, remains opaque. This paper proposes an explainable-by-design supervised domain adaptation framework, XSDA-Net. We integrate a case-based reasoning mechanism into XSDA-Net to explain the prediction on a test instance in terms of similar-looking regions in the source and target training images. We empirically demonstrate the utility of the proposed framework by curating domain adaptation settings on datasets widely known to exhibit part-based explainability.
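To make the case-based reasoning idea concrete, the sketch below shows one common way such explanations are produced: patch embeddings of a test image are compared against a bank of stored training-image regions (prototypes) from both domains, and the most similar regions are returned as the explanation. This is a minimal illustrative sketch, not the paper's actual implementation; the function name `explain_prediction`, the prototype-bank structure, and cosine similarity as the matching score are all assumptions.

```python
import numpy as np

def explain_prediction(test_patches, prototype_bank, top_k=3):
    """Rank stored region embeddings (prototypes) from source and target
    training images by cosine similarity to the test image's patches,
    so a prediction can be explained via similar-looking regions.

    test_patches:   (P, D) array of patch embeddings from the test image
    prototype_bank: list of dicts with keys 'embedding' (D,), 'domain',
                    'image_id', 'region'  (hypothetical structure)
    """
    protos = np.stack([p["embedding"] for p in prototype_bank])   # (M, D)
    # L2-normalize so the dot product below is cosine similarity.
    t = test_patches / np.linalg.norm(test_patches, axis=1, keepdims=True)
    m = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    sims = t @ m.T                           # (P, M) patch-prototype scores
    best = sims.max(axis=0)                  # best-matching test patch per prototype
    order = np.argsort(-best)[:top_k]        # most similar prototypes first
    return [(prototype_bank[i]["domain"],
             prototype_bank[i]["image_id"],
             prototype_bank[i]["region"],
             float(best[i])) for i in order]
```

An explanation is then a ranked list such as "this test region looks like region R of source image 12", which is the kind of evidence the abstract describes.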