Deep neural networks now outperform humans on many tasks. However, when the input distribution drifts away from the distribution used in training, their performance drops significantly. Recent research has shown that adapting the model parameters to the test sample can mitigate this degradation. In this paper, we therefore propose a novel modification of the self-supervised training algorithm SwAV that adds the ability to adapt to single test samples. Using the prototypes provided by SwAV and our derived test-time loss, we align the representations of unseen test samples with the self-supervised learned prototypes. We demonstrate the effectiveness of our method on the common benchmark dataset CIFAR10-C.