Multi-instance learning (MIL) is a powerful paradigm for dealing with complex data and has achieved impressive results in a number of fields, including image classification and video anomaly detection. Each data sample is referred to as a bag containing several unlabeled instances, and supervised information is only provided at the bag level. The security of MIL learners is a concern, however, since they can be badly fooled by introducing a few adversarial perturbations. This can be fatal in some cases, such as when users are unable to access desired images or criminals attempt to trick surveillance cameras. In this paper, we design two adversarial perturbations to interpret the vulnerability of MIL methods. The first method efficiently generates a bag-specific perturbation (called customized) with the aim of pushing the bag outside its original classification region. The second method builds on the first by investigating an image-agnostic perturbation (called universal) that aims to affect all bags in a given data set and thus achieves some generalizability. We conduct various experiments to verify the performance of these two perturbations, and the results show that both of them can effectively fool MIL learners. We additionally propose a simple strategy to lessen the effects of adversarial perturbations. Source codes are available at https://github.com/InkiInki/MI-UAP.
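To make the attack setting concrete, the sketch below illustrates how a bag-specific (customized) perturbation could be searched for with PGD-style gradient ascent on a bag-level loss under an L-infinity budget. The attention-based toy classifier, the loss, and all hyperparameters are illustrative assumptions for exposition, not the paper's exact algorithm; the actual implementation is in the linked repository.

```python
# Minimal sketch (assumed setup, not the paper's exact method): a bag-specific
# ("customized") adversarial perturbation found by PGD-style gradient ascent
# on the bag-level loss of a toy attention-based MIL classifier.
import torch
import torch.nn as nn


class ToyMILClassifier(nn.Module):
    """Hypothetical attention-based MIL classifier: bag -> bag-level logit."""

    def __init__(self, in_dim: int = 16, hid_dim: int = 32):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.attn = nn.Linear(hid_dim, 1)
        self.head = nn.Linear(hid_dim, 1)

    def forward(self, bag: torch.Tensor) -> torch.Tensor:
        # bag: (num_instances, in_dim), labeled only at the bag level
        h = self.embed(bag)                      # (n, hid_dim)
        a = torch.softmax(self.attn(h), dim=0)   # (n, 1) attention weights
        z = (a * h).sum(dim=0)                   # pooled bag embedding
        return self.head(z).squeeze(-1)          # bag-level logit


def customized_perturbation(model, bag, label, eps=0.1, alpha=0.02, steps=20):
    """PGD-style search for a bag-specific perturbation inside an L-inf ball."""
    delta = torch.zeros_like(bag, requires_grad=True)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        loss = loss_fn(model(bag + delta), label)   # bag-level loss only
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()      # ascend the loss
            delta.clamp_(-eps, eps)                 # stay inside the budget
        delta.grad.zero_()
    return delta.detach()


if __name__ == "__main__":
    model = ToyMILClassifier()
    bag = torch.randn(8, 16)                        # one bag of 8 instances
    label = torch.tensor(1.0)                       # bag-level supervision only
    delta = customized_perturbation(model, bag, label)
    print("clean logit:", model(bag).item(),
          "| perturbed logit:", model(bag + delta).item())
```

A universal (image-agnostic) perturbation would, by analogy, accumulate one shared `delta` over many bags so that a single perturbation degrades the learner across the whole data set.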