Feature alignment is an approach to improving robustness to distribution shift that matches the distribution of feature activations between the training and test distributions. A particularly simple but effective form of feature alignment aligns the batch normalization statistics between the two distributions in a trained neural network. This technique has received renewed interest lately because of its impressive performance on robustness benchmarks. However, when and why it works is not well understood. We investigate the approach in more detail and identify several limitations. We show that it helps significantly only with a narrow set of distribution shifts, and we identify several settings in which it even degrades performance. We also explain why these limitations arise by pinpointing why the approach can be so effective in the first place. Our findings call into question the utility of this approach, and of Unsupervised Domain Adaptation more broadly, for improving robustness in practice.
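To make the batch-normalization alignment concrete, the sketch below shows one common way it is implemented in PyTorch: freeze all learned weights, reset each BatchNorm layer's running mean and variance, and re-estimate them with forward passes over unlabeled test data. This is a minimal illustration under stated assumptions, not the paper's exact procedure; the function name adapt_bn_statistics and the test_loader yielding unlabeled input batches are hypothetical.

```python
import torch
import torch.nn as nn

def adapt_bn_statistics(model: nn.Module, test_loader, device="cpu"):
    """Re-estimate BatchNorm running statistics on unlabeled test data.

    Minimal sketch of BN-statistics alignment: no parameters are
    trained; only the BatchNorm running mean/variance are replaced
    with estimates computed on the test distribution.
    """
    model.to(device)
    # Reset running statistics in every BatchNorm layer and switch to
    # a cumulative moving average (momentum=None in PyTorch).
    for module in model.modules():
        if isinstance(module, nn.modules.batchnorm._BatchNorm):
            module.reset_running_stats()
            module.momentum = None
    # Train mode makes BatchNorm update its running statistics on each
    # forward pass; torch.no_grad() ensures no weights are updated.
    model.train()
    with torch.no_grad():
        for inputs in test_loader:
            model(inputs.to(device))
    # Inference now normalizes with the test-set statistics.
    model.eval()
    return model
```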