Despite their recent success, deep neural networks continue to perform poorly when they encounter distribution shifts at test time. Many recently proposed approaches attempt to counter this by aligning the model to the new distribution prior to inference. With no labels available, this requires unsupervised objectives to adapt the model on the observed test data. In this paper, we propose Test-Time Self-Training (TeST): a technique that takes as input a model trained on some source data and a novel data distribution at test time, and learns invariant and robust representations using a student-teacher framework. We find that models adapted using TeST significantly improve over baseline test-time adaptation algorithms. TeST achieves performance competitive with modern domain adaptation algorithms while having access to 5-10x less data at the time of adaptation. We thoroughly evaluate a variety of baselines on two tasks, object detection and image segmentation, and find that TeST sets a new state of the art for test-time domain adaptation algorithms.
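The abstract leaves the details of the student-teacher framework to the paper body, but a minimal sketch of one common form of test-time self-training is given below: the teacher is an exponential-moving-average copy of the student and supplies confidence-thresholded pseudo-labels on unlabeled test batches. The function name, hyperparameters, classification-style loss, and EMA update rule are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a student-teacher test-time adaptation loop.
# Assumptions (not from the paper): classification model, pseudo-label
# cross-entropy objective, EMA teacher, unlabeled test_loader yielding x.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def test_time_self_training(student: nn.Module,
                            test_loader,
                            steps: int = 100,
                            lr: float = 1e-4,
                            ema_decay: float = 0.999,
                            conf_threshold: float = 0.9):
    """Adapt `student` to unlabeled test data using teacher pseudo-labels."""
    teacher = copy.deepcopy(student)      # teacher starts as the source model
    for p in teacher.parameters():
        p.requires_grad_(False)
    optimizer = torch.optim.Adam(student.parameters(), lr=lr)

    data_iter = iter(test_loader)
    for _ in range(steps):
        try:
            x = next(data_iter)
        except StopIteration:             # cycle through the test data
            data_iter = iter(test_loader)
            x = next(data_iter)

        with torch.no_grad():             # teacher predicts pseudo-labels
            probs = F.softmax(teacher(x), dim=-1)
            conf, pseudo = probs.max(dim=-1)
            mask = (conf >= conf_threshold).float()  # keep confident ones

        logits = student(x)               # in practice x may be augmented here
        loss = (F.cross_entropy(logits, pseudo, reduction="none") * mask).mean()

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # update teacher as an exponential moving average of the student
        with torch.no_grad():
            for t, s in zip(teacher.parameters(), student.parameters()):
                t.mul_(ema_decay).add_(s, alpha=1.0 - ema_decay)
    return student
```

The slowly moving teacher is a common design choice in self-training: it stabilizes the pseudo-labels while the student adapts, avoiding the feedback loop that arises when a model trains directly on its own current predictions.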