In this paper, we tackle the problem of domain shift. Most existing methods train a single model on multiple source domains and apply that same trained model to all unseen target domains. Such solutions are sub-optimal, since each target domain has its own particularities to which the shared model is never adapted. Moreover, expecting a single model to absorb extensive knowledge from multiple source domains is counterintuitive: the model is biased toward learning only domain-invariant features, which can result in negative knowledge transfer. In this work, we propose a novel framework for unsupervised test-time adaptation, formulated as a knowledge distillation process to address domain shift. Specifically, we employ a Mixture-of-Experts (MoE) as the teacher, where each expert is trained separately on a different source domain to maximize its specialty. Given a test-time target domain, a small set of unlabeled data is sampled to query knowledge from the MoE. Since the source domains are correlated with the target domain, a transformer-based aggregator then combines the domain-specific knowledge by examining the interconnections among the domains. Its output serves as a supervision signal to adapt a student prediction network toward the target domain. We further employ meta-learning to enforce that the aggregator distills positive knowledge and that the student network achieves fast adaptation. Extensive experiments demonstrate that the proposed method outperforms the state-of-the-art and validate the effectiveness of each proposed component. Our code is available at https://github.com/n3il666/Meta-DMoE.
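The test-time distillation loop described above can be sketched at a very high level. This is a toy illustration only, not the paper's implementation: the "experts" are stand-in linear maps rather than separately trained networks, the transformer aggregator is replaced by a simple uniform weighting, and all names (`query_experts`, `aggregate`, `distill_step`) are hypothetical. It only shows the data flow: query the MoE with unlabeled target data, aggregate the expert outputs into a supervision signal, and take gradient steps that move the student toward that signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for experts trained separately on different source domains
# (random linear maps here; in the paper these are full networks).
n_experts, d_in, d_out = 3, 4, 2
experts = [rng.normal(size=(d_in, d_out)) for _ in range(n_experts)]

def query_experts(x_unlabeled):
    """Run a small unlabeled target-domain batch through every expert."""
    return np.stack([x_unlabeled @ W for W in experts])  # (n_experts, B, d_out)

def aggregate(expert_outputs):
    """Stand-in for the transformer-based aggregator: here just a uniform
    weighting of the experts' knowledge (the paper learns this combination)."""
    weights = np.full(len(expert_outputs), 1.0 / len(expert_outputs))
    return np.tensordot(weights, expert_outputs, axes=1)  # (B, d_out)

def distill_step(student_W, x, teacher_out, lr=0.2):
    """One gradient step on a squared-error distillation loss, moving the
    student toward the aggregated teacher signal."""
    grad = x.T @ (x @ student_W - teacher_out) / len(x)
    return student_W - lr * grad

# Test-time adaptation on one target domain.
x_target = rng.normal(size=(8, d_in))          # small unlabeled batch
teacher = aggregate(query_experts(x_target))   # supervision signal from the MoE
student_W = np.zeros((d_in, d_out))            # (linear) student to adapt
for _ in range(500):
    student_W = distill_step(student_W, x_target, teacher)
```

After the loop, the student's predictions on the target batch closely match the aggregated teacher signal, i.e., the target-specific knowledge has been distilled into the student. The meta-learning component of the paper (which trains the aggregator and the student initialization for fast adaptation) is omitted here.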