This paper presents a simple yet effective approach that improves continual test-time adaptation (TTA) in a memory-efficient manner. TTA may primarily be conducted on edge devices with limited memory, so reducing memory is crucial but has been overlooked in previous TTA studies. In addition, long-term adaptation often leads to catastrophic forgetting and error accumulation, which hinders applying TTA in real-world deployments. Our approach consists of two components to address these issues. First, we present lightweight meta networks that can adapt the frozen original networks to the target domain. This novel architecture minimizes memory consumption by decreasing the size of intermediate activations required for backpropagation. Second, our novel self-distilled regularization controls the output of the meta networks not to deviate significantly from the output of the frozen original networks, thereby preserving well-trained knowledge from the source domain. Without additional memory, this regularization prevents error accumulation and catastrophic forgetting, resulting in stable performance even in long-term test-time adaptation. We demonstrate that our simple yet effective strategy outperforms other state-of-the-art methods on various benchmarks for image classification and semantic segmentation tasks. Notably, our proposed method with ResNet-50 and WideResNet-40 takes 86% and 80% less memory than the recent state-of-the-art method, CoTTA.
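To make the two components concrete, below is a minimal PyTorch-style sketch, not the paper's reference implementation: a lightweight meta network adapts the features of a frozen backbone stage, and a self-distilled regularization term penalizes deviation from the frozen features. The names `MetaBlock`, `self_distilled_regularization`, the residual 1x1-conv design, and the L1 penalty are illustrative assumptions; `entropy_loss`, `head`, and `lam` in the usage comment are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class MetaBlock(nn.Module):
    """Lightweight adapter attached to one frozen backbone stage.

    Only this block is trained at test time. Because the frozen stage
    computes no weight gradients, its intermediate activations need not
    be stored for backpropagation, which is where the memory saving
    comes from. (Illustrative sketch, not the paper's code.)
    """
    def __init__(self, channels: int):
        super().__init__()
        # A single 1x1 conv + BN + ReLU is one plausible lightweight design.
        self.adapt = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, frozen_out: torch.Tensor) -> torch.Tensor:
        # Residual combination keeps the frozen features as the default path.
        return frozen_out + self.adapt(frozen_out)


def self_distilled_regularization(meta_out: torch.Tensor,
                                  frozen_out: torch.Tensor) -> torch.Tensor:
    """Penalize deviation of the adapted features from the frozen ones,
    preserving source knowledge without storing any extra source data."""
    # frozen_out is detached: the frozen network acts as a fixed teacher.
    return (meta_out - frozen_out.detach()).abs().mean()


# Hypothetical usage for one adaptation step on an unlabeled test batch `x`:
# frozen_feat = frozen_stage(x)        # frozen: no weight gradients below here
# meta_feat = meta_block(frozen_feat)  # only meta_block stores activations
# loss = entropy_loss(head(meta_feat)) \
#        + lam * self_distilled_regularization(meta_feat, frozen_feat)
```

The key design choice this sketch illustrates is that the regularizer reuses the frozen network's own forward pass as its teacher signal, so preserving source knowledge costs no additional memory beyond the adaptation loss itself.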