In the last decade, neural networks have made a huge impact in both industry and research due to their ability to extract meaningful features from imprecise or complex data and to achieve superhuman performance in several domains. However, their lack of transparency hampers their use in safety-critical areas, where explainability is often required by law. Recently, several methods have been proposed to open this black box by providing interpretations of the predictions made by these models. This paper focuses on time series analysis and benchmarks several state-of-the-art attribution methods that compute explanations for convolutional classifiers. The presented experiments cover gradient-based and perturbation-based attribution methods. A detailed analysis shows that perturbation-based approaches are superior with respect to Sensitivity and the occlusion game; these methods also tend to produce explanations with higher continuity. In contrast, gradient-based techniques excel in runtime and Infidelity. In addition, we examine the methods' dependence on the trained model, their feasible application domains, and their individual characteristics. The findings emphasize that choosing the best-suited attribution method strongly depends on the intended use case. Neither category of attribution methods nor any single approach showed outstanding performance across all aspects.
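To make the perturbation-based family concrete, the following is a minimal sketch of occlusion attribution for a univariate time series: each window of the input is replaced by a baseline value, and the drop in the model's class score is credited to the occluded time steps. The linear scorer, the window size, and all names here are illustrative toy assumptions, not the benchmarked classifiers or any specific library API.

```python
import numpy as np

def occlusion_attribution(model, x, window=4, baseline=0.0):
    """Perturbation-based attribution for a 1-D time series.

    `model` is assumed to map a 1-D array to a scalar class score.
    Each sliding window is replaced by `baseline`; the resulting drop
    in the score is distributed over the occluded time steps.
    """
    ref = model(x)
    attr = np.zeros_like(x, dtype=float)
    counts = np.zeros_like(x, dtype=float)
    for start in range(len(x) - window + 1):
        x_occ = x.copy()
        x_occ[start:start + window] = baseline  # occlude one window
        drop = ref - model(x_occ)               # large drop => window mattered
        attr[start:start + window] += drop
        counts[start:start + window] += 1
    return attr / np.maximum(counts, 1)         # average over covering windows

# Toy "classifier": the score depends only on time steps 10..14.
w = np.zeros(32)
w[10:15] = 1.0
model = lambda s: float(s @ w)

x = np.ones(32)
a = occlusion_attribution(model, x, window=4)
```

In this toy setup, the attribution is positive exactly on the time steps the scorer uses and zero elsewhere, which illustrates why such occlusion scores are judged by metrics like the occlusion game rather than by runtime, where gradient-based methods have the advantage.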