Indiscriminate data poisoning attacks are quite effective against supervised learning. However, little is known about their impact on unsupervised contrastive learning (CL). This paper is the first to consider indiscriminate poisoning attacks on contrastive learning. We propose Contrastive Poisoning (CP), the first effective such attack on CL. We empirically show that Contrastive Poisoning not only drastically reduces the performance of CL algorithms, but also attacks supervised learning models, making it the most generalizable indiscriminate poisoning attack. We also show that CL algorithms with a momentum encoder are more robust to indiscriminate poisoning, and we propose a new countermeasure based on matrix completion. Code is available at: https://github.com/kaiwenzha/contrastive-poisoning.
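The abstract states the attack only at a high level. As a rough, hedged illustration of how indiscriminate poisoning can be mounted against a contrastive objective, the sketch below alternates between training a toy encoder and updating bounded, sample-wise perturbations so as to minimize an InfoNCE loss on the poisoned data. The encoder, augmentation, perturbation budget, and alternating schedule are illustrative assumptions, not the authors' implementation; see the released code for the actual method.

```python
# Minimal sketch (assumptions noted above): bounded per-sample perturbations are
# optimized to minimize a contrastive (InfoNCE) loss, alternating with encoder updates.
import torch
import torch.nn as nn
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.5):
    """Standard InfoNCE-style loss for two views of the same batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                 # (B, B) similarity matrix
    labels = torch.arange(z1.size(0))          # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

def random_view(x):
    # Placeholder augmentation (additive noise); real CL uses crops, color jitter, etc.
    return x + 0.05 * torch.randn_like(x)

# Toy data and encoder (assumed; CIFAR-scale images and a ResNet in practice).
images = torch.rand(64, 3 * 32 * 32)
encoder = nn.Sequential(nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, 128))
opt_enc = torch.optim.SGD(encoder.parameters(), lr=0.1)

eps = 8 / 255                                          # L-infinity poison budget
delta = torch.zeros_like(images, requires_grad=True)   # sample-wise perturbations
opt_delta = torch.optim.SGD([delta], lr=0.1)

for step in range(100):
    poisoned = (images + delta).clamp(0, 1)
    loss = info_nce(encoder(random_view(poisoned)), encoder(random_view(poisoned)))
    opt_enc.zero_grad(); opt_delta.zero_grad()
    loss.backward()
    # Alternate: update the encoder on poisoned data, then update the poison
    # itself to further minimize the same contrastive loss.
    if step % 2 == 0:
        opt_enc.step()
    else:
        opt_delta.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)            # keep the perturbation bounded
```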