Deep Neural Networks lead the state of the art in computer vision tasks. Despite this, they are brittle: small changes in the input can drastically alter their prediction outcome and confidence. Consequently, research in this area has mainly focused on adversarial attacks and defenses. In this paper, we take an alternative stance and introduce the concept of Assistive Signals, perturbations optimized to improve a model's confidence score regardless of whether the model is under attack. We analyse some interesting properties of these assistive perturbations and extend the idea to optimize assistive signals in 3D space for real-life scenarios, simulating different lighting conditions and viewing angles. Experimental evaluations show that the assistive signals generated by our optimization method increase the accuracy and confidence of deep models more than those generated by conventional methods that work in 2D space. In addition, our Assistive Signals illustrate the intrinsic bias of ML models towards certain patterns in real-life objects. We discuss how these insights can be exploited to re-think, or avoid, patterns that might contribute to, or degrade, the detectability of objects in the real world.
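To make the core idea concrete: an assistive signal can be understood as the mirror image of an adversarial perturbation, obtained by gradient *ascent* (rather than descent) on the confidence assigned to the true class. The following is a minimal, self-contained sketch on a toy linear classifier; the model, dimensions, step size, and perturbation budget `eps` are illustrative assumptions, not the paper's actual setup (which operates on deep models and, in the 3D case, on textures rendered under varying lighting and viewpoints).

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 16))  # toy linear classifier: 5 classes, 16-dim input
x = rng.normal(size=16)       # clean input
y = 2                         # ground-truth class

# Optimize a bounded additive perturbation (the "assistive signal") that
# maximizes the model's confidence in the true class y.
delta = np.zeros_like(x)
eps, lr = 0.5, 0.1
for _ in range(200):
    p = softmax(W @ (x + delta))
    # analytic gradient of log p[y] w.r.t. the input: W[y] - E_p[W]
    grad = W[y] - p @ W
    delta = np.clip(delta + lr * grad, -eps, eps)  # keep the signal small

p_before = softmax(W @ x)[y]
p_after = softmax(W @ (x + delta))[y]
```

After optimization, `p_after` exceeds `p_before`: the same ascent direction that an attacker would negate to cause misclassification is used here to make the object *easier* to recognize.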