Self-supervised contrastive learning has alleviated one of the major obstacles in deep learning: the cost of annotation. This advantage comes at the price of false negative-pair selection, since pairs must be chosen without any label information. Supervised contrastive learning has emerged as an extension of contrastive learning that eliminates this issue. However, beyond accuracy, little is understood about how adversarial training affects the representations learned under these learning schemes. In this work, we use supervised learning as a baseline to comprehensively study the robustness of contrastive and supervised contrastive learning under different adversarial training scenarios. We begin by examining how adversarial training affects the representations learned in hidden layers, and we find that it increases the redundancy of representations across the layers of the model. Our results on the CIFAR-10 and CIFAR-100 image classification benchmarks demonstrate that adversarial fine-tuning greatly reduces this redundancy for the contrastive learning scheme, leading to more robust representations. However, adversarial fine-tuning is far less effective for the supervised contrastive learning and supervised learning schemes. Our code is released at https://github.com/softsys4ai/CL-Robustness.