Supervised learning methods trained with maximum likelihood objectives often overfit the training data. Most regularizers that prevent overfitting aim either to increase confidence on additional examples (e.g., data augmentation, adversarial training) or to reduce it on the training data itself (e.g., label smoothing). In this work we propose a complementary regularization strategy that reduces confidence on self-generated examples. The method, which we call RCAD (Reducing Confidence along Adversarial Directions), aims to reduce confidence on out-of-distribution examples lying along directions adversarially chosen to increase training loss. In contrast to adversarial training, RCAD does not try to robustify the model to output the original label, but rather regularizes it to have reduced confidence on points generated using much larger perturbations than in conventional adversarial training. RCAD can be easily integrated into training pipelines with a few lines of code. Despite its simplicity, we find on many classification benchmarks that RCAD can be added to existing techniques (e.g., label smoothing, MixUp training) to increase test accuracy by 1-3% in absolute value, with more significant gains in the low-data regime. We also provide a theoretical analysis that helps to explain these benefits in simplified settings, showing that RCAD can provably help the model unlearn spurious features in the training data.
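To make the "few lines of code" claim concrete, the sketch below illustrates one plausible way to implement an RCAD-style objective in PyTorch: take a single large gradient-ascent step on the input to obtain a self-generated example along the adversarial direction, then add an entropy-maximization term on that example to the usual cross-entropy loss. The function name, step size `alpha`, weight `lam`, and the sign-based perturbation are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def rcad_loss(model, x, y, alpha=1.0, lam=0.1):
    """Illustrative RCAD-style objective (hyperparameters and perturbation
    form are assumptions, not the paper's exact recipe)."""
    # Step 1: find the adversarial direction, i.e., the gradient of the
    # training loss with respect to the input.
    x_ = x.clone().detach().requires_grad_(True)
    loss_for_grad = F.cross_entropy(model(x_), y)
    grad = torch.autograd.grad(loss_for_grad, x_)[0]

    # Step 2: take one large step along that direction. Unlike adversarial
    # training, no label is reassigned to the perturbed point; it is treated
    # as an out-of-distribution, self-generated example.
    x_adv = (x + alpha * grad.sign()).detach()

    # Step 3: standard cross-entropy on the clean batch ...
    ce = F.cross_entropy(model(x), y)

    # ... plus an entropy-maximization term on the perturbed batch, which
    # reduces the model's confidence along the adversarial direction.
    log_probs = F.log_softmax(model(x_adv), dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1).mean()

    return ce - lam * entropy
```

Because the extra term only requires one additional forward/backward pass per batch, this regularizer can be dropped into an existing training loop by replacing the loss computation with a call such as `loss = rcad_loss(model, x, y)`.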