Indiscriminate data poisoning attacks are quite effective against supervised learning. However, little is known about their impact on unsupervised contrastive learning (CL). This paper is the first to consider indiscriminate poisoning attacks on contrastive learning. We propose contrastive poisoning, the first effective such attack on CL. We empirically show that contrastive poisoning not only drastically reduces the performance of CL algorithms, but also attacks supervised learning models, making it the most generalizable indiscriminate poisoning attack. We also show that CL algorithms with a momentum encoder are more robust to indiscriminate poisoning, and propose a new countermeasure based on matrix completion.