Many real-world data streams change frequently and in a nonstationary way, yet most deep learning methods optimize neural networks on a fixed training set, which leads to severe performance degradation when dataset shift occurs. Since newly streamed data can rarely be annotated or inspected by humans, it is desirable to measure model drift at inference time in an unsupervised manner. In this paper, we propose a novel method for model drift estimation that exploits the statistics of batch normalization layers on unlabeled test data. To remedy the sampling error of streamed input data, we apply a low-rank approximation to each representational layer. We show the effectiveness of our method not only for dataset shift detection but also for unsupervised model selection when multiple candidate models are available, whether from a model zoo or from training trajectories. We further demonstrate the consistency of our method by comparing model drift scores across different network architectures.
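To make the core idea concrete, below is a minimal sketch (not the paper's exact formulation) of scoring model drift from batch normalization statistics: each BN layer's stored running mean and variance from training are compared against per-channel activation statistics recomputed on an unlabeled test stream. The function name `bn_drift_score`, the choice of a symmetric KL divergence between per-channel Gaussians, and the averaging over layers are all illustrative assumptions; the paper's low-rank approximation step is omitted here.

```python
# Sketch: unsupervised drift score from BatchNorm statistics (assumptions noted above).
import torch
import torch.nn as nn

@torch.no_grad()
def bn_drift_score(model: nn.Module, test_loader, device="cpu") -> float:
    model.eval().to(device)
    bn_layers = [m for m in model.modules() if isinstance(m, nn.BatchNorm2d)]

    # Accumulate test-time activation statistics via forward hooks.
    sums = [torch.zeros(bn.num_features, device=device) for bn in bn_layers]
    sq_sums = [torch.zeros(bn.num_features, device=device) for bn in bn_layers]
    counts = [0 for _ in bn_layers]

    def make_hook(i):
        def hook(module, inputs, output):
            x = inputs[0]  # (N, C, H, W): reduce over N, H, W per channel
            sums[i] += x.sum(dim=(0, 2, 3))
            sq_sums[i] += (x ** 2).sum(dim=(0, 2, 3))
            counts[i] += x.numel() // x.size(1)
        return hook

    handles = [bn.register_forward_hook(make_hook(i))
               for i, bn in enumerate(bn_layers)]
    for batch, *_ in test_loader:
        model(batch.to(device))
    for h in handles:
        h.remove()

    score, eps = 0.0, 1e-5
    for i, bn in enumerate(bn_layers):
        mu_t = sums[i] / counts[i]
        var_t = sq_sums[i] / counts[i] - mu_t ** 2
        mu_s, var_s = bn.running_mean, bn.running_var  # training-time statistics
        # Symmetric KL between N(mu_s, var_s) and N(mu_t, var_t), per channel.
        kl = 0.5 * ((var_s + (mu_s - mu_t) ** 2) / (var_t + eps)
                    + (var_t + (mu_s - mu_t) ** 2) / (var_s + eps) - 2)
        score += kl.mean().item()
    return score / max(len(bn_layers), 1)
```

Under this sketch, a larger score indicates a larger gap between training-time and test-time feature statistics, so it can be read as a proxy for model drift and compared across checkpoints or candidate models.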