Improving a model's generalizability against domain shifts is crucial, especially for safety-critical applications such as autonomous driving. Real-world domain styles can vary substantially due to environment changes and sensor noise, but deep models only know the training domain style. Such a domain style gap impedes model generalization on diverse real-world domains. Our proposed Normalization Perturbation (NP) can effectively overcome this domain style overfitting problem. We observe that this problem is mainly caused by the biased distribution of low-level features learned in shallow CNN layers. Thus, we propose to perturb the channel statistics of source domain features to synthesize various latent styles, so that the trained deep model can perceive diverse potential domains and generalize well even without observing target domain data during training. We further explore the style-sensitive channels for effective style synthesis. Normalization Perturbation relies only on a single source domain, is surprisingly effective, and is extremely easy to implement. Extensive experiments verify the effectiveness of our method for generalizing models under real-world domain shifts.
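The abstract does not spell out implementation details. As a rough illustration only, the sketch below shows one plausible way to perturb the per-channel statistics of shallow feature maps in PyTorch during training: normalize each channel, then re-apply its mean and standard deviation scaled by random multiplicative noise. The module name NormalizationPerturbation, the Gaussian noise model, and the noise scale sigma are assumptions for illustration, not the paper's exact recipe.

```python
import torch
import torch.nn as nn


class NormalizationPerturbation(nn.Module):
    """Illustrative sketch (assumed details, not the paper's exact method):
    perturb per-channel statistics of a shallow CNN feature map at training
    time to synthesize diverse latent styles."""

    def __init__(self, sigma: float = 0.75):
        super().__init__()
        self.sigma = sigma  # assumed std of the multiplicative noise

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) feature map from a shallow layer.
        if not self.training:
            return x  # perturb only during training; identity at inference
        n, c = x.shape[:2]
        mu = x.mean(dim=(2, 3), keepdim=True)         # per-channel mean
        std = x.std(dim=(2, 3), keepdim=True) + 1e-6  # per-channel std
        # Random multiplicative noise centered at 1 for mean and std.
        alpha = 1.0 + self.sigma * torch.randn(n, c, 1, 1, device=x.device)
        beta = 1.0 + self.sigma * torch.randn(n, c, 1, 1, device=x.device)
        # Normalize, then restyle with the perturbed channel statistics.
        x_norm = (x - mu) / std
        return x_norm * (std * alpha) + mu * beta
```

Under these assumptions, such a module would be inserted after the first one or two convolution stages of the backbone, where the abstract says style information is concentrated, and is disabled at inference via the self.training check.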