Contrastive learning (CL) has recently been applied to adversarial learning tasks. Such practice considers adversarial samples as additional positive views of an instance, and by maximizing their agreement with each other, yields better adversarial robustness. However, this mechanism can be potentially flawed, since adversarial perturbations may cause instance-level identity confusion, which can impede CL performance by pulling together different instances with separate identities. To address this issue, we propose to treat adversarial samples unequally when contrasted, with an asymmetric InfoNCE objective (A-InfoNCE) that allows discriminative treatment of adversarial samples. Specifically, adversaries are viewed as inferior positives that induce weaker learning signals, or as hard negatives exhibiting higher contrast to other negative samples. In this asymmetric manner, the adverse impacts of conflicting objectives between CL and adversarial learning can be effectively mitigated. Experiments show that our approach consistently outperforms existing adversarial CL methods across different fine-tuning schemes without additional computational cost. The proposed A-InfoNCE is also a generic form that can be readily extended to other CL methods. Code is available at https://github.com/yqy2001/A-InfoNCE.
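To make the asymmetry concrete, the sketch below illustrates one way the "inferior positive" idea could be realized: an InfoNCE-style loss in which the adversarial positive pair is down-weighted relative to the standard augmented positive pair. The function name, the weighting factor `alpha`, and the exact weighting scheme are illustrative assumptions for exposition, not the released implementation.

```python
import torch
import torch.nn.functional as F

def asymmetric_info_nce(z_clean, z_aug, z_adv, temperature=0.5, alpha=0.5):
    """Minimal sketch of an asymmetric InfoNCE-style loss.

    z_clean, z_aug, z_adv: (N, D) embeddings of a clean view, a standard
    augmented view, and an adversarial view of the same N instances.
    alpha < 1 down-weights the adversarial positive pair ("inferior positive"),
    so it exerts a weaker pulling signal than the clean positive pair.
    """
    z_clean = F.normalize(z_clean, dim=1)
    z_aug = F.normalize(z_aug, dim=1)
    z_adv = F.normalize(z_adv, dim=1)

    n = z_clean.size(0)
    # Candidate set: augmented and adversarial views of all instances.
    candidates = torch.cat([z_aug, z_adv], dim=0)           # (2N, D)
    logits = z_clean @ candidates.t() / temperature         # (N, 2N)

    # Shared denominator over all candidates (SimCLR-style).
    log_denom = torch.logsumexp(logits, dim=1)

    idx = torch.arange(n)
    pos_aug = logits[idx, idx]        # clean <-> augmented positive
    pos_adv = logits[idx, idx + n]    # clean <-> adversarial positive

    # Full-strength pull for the standard positive,
    # alpha-scaled (weaker) pull for the adversarial positive.
    loss = -(pos_aug - log_denom) - alpha * (pos_adv - log_denom)
    return loss.mean()
```

Under the same assumptions, the complementary "hard negative" view would instead up-weight adversarial samples when they appear as negatives in the denominator, increasing their contrast against other negatives rather than weakening their pull as positives.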