Deep neural networks are the default choice of learning model for computer vision tasks. Extensive work has been carried out in recent years on explaining deep models for vision tasks such as classification. However, recent work has shown that these models can produce substantially different attribution maps even when two very similar images are given to the network, raising serious questions about trustworthiness. To address this issue, we propose a robust attribution training strategy that improves the attributional robustness of deep neural networks. Our method carefully analyzes the requirements for attributional robustness and introduces two new regularizers that preserve a model's attribution map under attack. Our method surpasses state-of-the-art attributional robustness methods by approximately 3% to 9% on attribution robustness measures across several datasets, including MNIST, FMNIST, Flower and GTSRB.
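To make the idea of an attribution-preserving regularizer concrete, the sketch below shows one generic way such a term can be combined with the standard classification loss: penalize the discrepancy between the saliency (input-gradient) map of a clean image and that of a perturbed copy. This is a minimal illustration only, not the paper's exact regularizers; the FGSM-style perturbation, the cosine-similarity penalty, and the names `saliency`, `attribution_robust_loss`, `eps`, and `lam` are all assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def saliency(model, x, y):
    """Input-gradient saliency map of the true-class score w.r.t. the input."""
    x = x.clone().detach().requires_grad_(True)
    score = model(x).gather(1, y.unsqueeze(1)).sum()
    grad, = torch.autograd.grad(score, x, create_graph=True)
    return grad

def attribution_robust_loss(model, x, y, eps=8 / 255, lam=1.0):
    """Cross-entropy plus a hypothetical regularizer that keeps the saliency
    map of a perturbed input close to that of the clean input."""
    # Build a perturbed copy with a single FGSM-like step (illustrative choice).
    x_adv = x.clone().detach().requires_grad_(True)
    grad_adv, = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)
    x_adv = (x + eps * grad_adv.sign()).clamp(0, 1).detach()

    ce = F.cross_entropy(model(x), y)
    s_clean = saliency(model, x, y).flatten(1)
    s_pert = saliency(model, x_adv, y).flatten(1)
    # Regularizer: 1 - cosine similarity between clean and perturbed maps.
    reg = (1.0 - F.cosine_similarity(s_clean, s_pert, dim=1)).mean()
    return ce + lam * reg
```

In a training loop this loss would simply replace the plain cross-entropy term, so the network is optimized both for correct predictions and for attribution maps that remain stable under small input perturbations.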