Despite recent advances in deep learning, deep networks still suffer performance degradation when they encounter data that differ from their training distribution. To address this problem, test-time adaptation (TTA) aims to adapt a model to unlabeled test data at test time while simultaneously making predictions. TTA applies to pretrained networks without modifying their training procedures, which makes it possible to exploit the already well-formed source distribution for adaptation. One possible approach is to align the representation space of test samples with the source distribution (\textit{i.e.,} feature alignment). However, feature alignment in TTA is especially challenging because access to labeled source data is restricted during adaptation. That is, the model has no chance to learn the test data in a class-discriminative manner, which is feasible in other adaptation tasks (\textit{e.g.,} unsupervised domain adaptation) via a supervised loss on the source data. Motivated by this observation, this paper proposes \emph{a simple yet effective} feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which simultaneously 1) encourages a model to learn target representations in a class-discriminative manner and 2) effectively mitigates distribution shifts at test time. Our method requires neither the hyper-parameters nor the additional losses that previous approaches rely on. We conduct extensive experiments and show that our proposed method consistently outperforms existing baselines.
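The abstract does not spell out the CAFA loss itself, so the following is only a hedged sketch of the general idea it describes: aligning test features, class-wise, to per-class source statistics. One standard way to realize this is a squared Mahalanobis distance between each test feature and the source mean of its pseudo-labeled class; all function and variable names here are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

# Illustrative sketch only -- not the paper's exact CAFA loss.
# Assumes per-class source means and precision (inverse covariance)
# matrices were precomputed on the source data before deployment.

def class_aware_alignment_loss(features, pseudo_labels, class_means, class_precisions):
    """Average squared Mahalanobis distance from each test feature to the
    source statistics of its (pseudo-)class."""
    total = 0.0
    for f, y in zip(features, pseudo_labels):
        diff = f - class_means[y]                          # deviation from source class mean
        total += float(diff @ class_precisions[y] @ diff)  # squared Mahalanobis distance
    return total / len(features)

# Toy check: features lying exactly on their class means give zero loss.
rng = np.random.default_rng(0)
means = rng.normal(size=(3, 4))          # 3 classes, 4-dim features
precisions = np.stack([np.eye(4)] * 3)   # identity precision per class
feats = means[[0, 1, 2]]                 # one sample per class, at the mean
print(class_aware_alignment_loss(feats, [0, 1, 2], means, precisions))  # 0.0
```

Minimizing such a distance pulls shifted test features back toward class-conditional source statistics, which matches the abstract's two stated goals: class-discriminative learning and mitigation of distribution shift.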