Most recent test-time adaptation methods focus only on classification tasks, use specialized network architectures, destroy model calibration, or rely on lightweight information from the source domain. To tackle these issues, this paper proposes a novel Test-time Self-Learning method with automatic Adversarial augmentation, dubbed TeSLA, for adapting a pre-trained source model to unlabeled streaming test data. In contrast to conventional self-learning methods based on cross-entropy, we introduce a new test-time loss function with an implicitly tight connection to mutual information and online knowledge distillation. Furthermore, we propose an efficient learnable adversarial augmentation module that further enhances online knowledge distillation by simulating high-entropy augmented images. Our method achieves state-of-the-art classification and segmentation results on several benchmarks and types of domain shifts, particularly on challenging measurement shifts of medical images. TeSLA also exhibits several desirable properties compared to competing methods, including better calibration and uncertainty metrics and insensitivity to model architectures and source training strategies, all supported by extensive ablations. Our code and models are available on GitHub.
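To make the described pipeline concrete, the following is a minimal sketch of a test-time self-learning loop in the spirit of the abstract: an EMA teacher supplies soft pseudo-labels for streaming unlabeled test batches, and the student is updated with a distillation term plus a marginal-entropy (diversity) term that together approximate mutual-information maximization. All names, hyper-parameters, and the simple flip "augmentation" are illustrative assumptions, not the paper's exact formulation (which instead learns adversarial augmentations).

```python
# Hypothetical sketch of test-time self-learning with an EMA teacher.
# Assumed components: a pre-trained source classifier, a streaming loader
# of unlabeled test batches, and a simple distillation + diversity loss.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


def build_source_model(num_classes: int = 10) -> nn.Module:
    # Stand-in for a pre-trained source model (e.g., a CNN classifier).
    return nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256),
                         nn.ReLU(), nn.Linear(256, num_classes))


@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, momentum: float = 0.99):
    # Teacher weights follow an exponential moving average of the student.
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)


def adapt_on_stream(source_model: nn.Module, test_stream, lr: float = 1e-4):
    student = copy.deepcopy(source_model)
    teacher = copy.deepcopy(source_model)
    for p in teacher.parameters():
        p.requires_grad_(False)
    optimizer = torch.optim.Adam(student.parameters(), lr=lr)

    for batch in test_stream:                     # unlabeled streaming test data
        with torch.no_grad():
            soft_labels = F.softmax(teacher(batch), dim=1)

        # Placeholder "augmented" view; the paper instead learns adversarial
        # augmentations that produce high-entropy views.
        augmented = torch.flip(batch, dims=[-1])
        log_probs = F.log_softmax(student(augmented), dim=1)

        # Knowledge-distillation term: match the teacher's soft pseudo-labels.
        distill = -(soft_labels * log_probs).sum(dim=1).mean()
        # Diversity term: maximize entropy of the marginal prediction,
        # discouraging collapse onto a single class.
        marginal = log_probs.exp().mean(dim=0)
        diversity = (marginal * marginal.clamp_min(1e-8).log()).sum()

        loss = distill + diversity
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        ema_update(teacher, student)
    return student


if __name__ == "__main__":
    model = build_source_model()
    fake_stream = [torch.randn(8, 3, 32, 32) for _ in range(3)]
    adapted = adapt_on_stream(model, fake_stream)
```

The teacher-student split mirrors online knowledge distillation: only the student receives gradients, while the slowly moving teacher stabilizes the pseudo-labels on the non-stationary test stream.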