While convolutional neural networks (CNNs) have found wide adoption as state-of-the-art models for image-related tasks, their predictions are often highly sensitive to small input perturbations to which human vision is robust. This paper presents Perturber, a web-based application that allows users to explore instantaneously how CNN activations and predictions evolve when a 3D input scene is interactively perturbed. Perturber offers a large variety of scene modifications, including camera controls, lighting and shading effects, background modifications, object morphing, and adversarial attacks, to facilitate the discovery of potential vulnerabilities. Fine-tuned model versions can be compared directly for a qualitative evaluation of their robustness. Case studies with machine learning experts have shown that Perturber helps users quickly generate hypotheses about model vulnerabilities and qualitatively compare model behavior. Using quantitative analyses, we were able to replicate users' insights with other CNN architectures and input images, yielding new insights about the vulnerability of adversarially trained models.
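To make the notion of an adversarial attack concrete, the following is a minimal sketch of one classic gradient-based perturbation, the Fast Gradient Sign Method (FGSM), representative of the kind of attack a tool like Perturber can apply. The choice of model, input shape, and epsilon are illustrative assumptions, not details taken from the paper.

```python
# Minimal FGSM sketch: nudge each pixel in the direction that increases
# the classification loss. The model and epsilon here are illustrative
# assumptions; the paper does not prescribe this implementation.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_perturb(image: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of `image`.

    image: (1, 3, H, W) float tensor, preprocessed for the model.
    label: (1,) long tensor with the true class index.
    """
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # A small step along the sign of the gradient is often enough to
    # change the prediction while remaining imperceptible to humans.
    return (image + epsilon * image.grad.sign()).detach()
```

Comparing the model's outputs before and after such a perturbation mirrors, in code, the side-by-side comparison of model behavior that Perturber provides interactively.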