This paper examines the robustness of deployed few-shot meta-learning systems when they are fed an imperceptibly perturbed few-shot dataset. We attack amortized meta-learners, which allows us to craft colluding sets of inputs that are tailored to fool the system's learning algorithm when used as training data. Jointly crafted adversarial inputs might be expected to synergistically manipulate a classifier, allowing for very strong data-poisoning attacks that would be hard to detect. We show that in a white-box setting, these attacks are very successful and can cause the target model's predictions to become worse than chance. However, in contrast to the well-known transferability of adversarial examples in general, the colluding sets do not transfer well to different classifiers. We explore two hypotheses to explain this: 'overfitting' by the attack, and mismatch between the model on which the attack is generated and that to which the attack is transferred. Regardless of the mitigation strategies suggested by these hypotheses, the colluding inputs transfer no better than adversarial inputs that are generated independently in the usual way.
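To make the white-box attack concrete, the following is a minimal sketch (not the authors' implementation) of jointly crafting a colluding support set: projected gradient ascent is run over the whole support set at once so that, when the amortized learner adapts on the poisoned data, its query-set loss is maximized. The embedding network, the prototypical-network-style adaptation step, and names such as `embed`, `eps`, and `attack_support` are illustrative assumptions.

```python
# Sketch of white-box support-set poisoning against an amortized few-shot learner.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical frozen embedding network standing in for a trained amortized meta-learner.
embed = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 64))
for p in embed.parameters():
    p.requires_grad_(False)

def query_loss(support_x, support_y, query_x, query_y, n_way):
    """Cross-entropy on the query set, given a (possibly poisoned) support set."""
    z_s, z_q = embed(support_x), embed(query_x)
    # Class prototypes computed from the support embeddings: this is the
    # "learning algorithm" that the colluding inputs try to fool.
    protos = torch.stack([z_s[support_y == c].mean(0) for c in range(n_way)])
    logits = -torch.cdist(z_q, protos)           # nearest-prototype classifier
    return F.cross_entropy(logits, query_y)

def attack_support(support_x, support_y, query_x, query_y, n_way,
                   eps=0.1, step=0.02, iters=40):
    """PGD over the entire support set jointly, producing a colluding adversarial set."""
    delta = torch.zeros_like(support_x, requires_grad=True)
    for _ in range(iters):
        loss = query_loss(support_x + delta, support_y, query_x, query_y, n_way)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()    # ascend the query loss
            delta.clamp_(-eps, eps)              # keep perturbations imperceptibly small
            delta.grad.zero_()
    return (support_x + delta).detach()

# Toy 5-way, 1-shot episode with random data, just to exercise the attack loop.
n_way = 5
support_x, query_x = torch.rand(n_way, 1, 28, 28), torch.rand(25, 1, 28, 28)
support_y, query_y = torch.arange(n_way), torch.arange(n_way).repeat_interleave(5)
poisoned_support = attack_support(support_x, support_y, query_x, query_y, n_way)
```

Because all support perturbations are optimized under a single query-loss objective, the inputs can collude rather than act independently, which is what distinguishes this attack from crafting per-example adversarial inputs in the usual way.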