
Title

PDE Based Trustworthy Deep Learning (44-page slide deck)

Keywords

Trustworthy deep learning, deep learning interpretability, artificial intelligence, deep learning security

Introduction

The safety and trustworthiness of artificial intelligence have long been a focus of public concern. The black-box nature of deep learning in particular makes AI systems harder to understand, so understanding the trustworthiness of deep learning is essential for the wider adoption of AI.

Author

Stan Osher, Department of Mathematics, UCLA

Latest Papers

Saliency methods are used extensively to highlight the importance of input features in model predictions. These methods are mostly used in vision and language tasks, and their application to time series data is relatively unexplored. In this paper, we set out to extensively compare the performance of various saliency-based interpretability methods across diverse neural architectures, including Recurrent Neural Networks, Temporal Convolutional Networks, and Transformers, in a new benchmark of synthetic time series data. We propose and report multiple metrics to empirically evaluate the performance of saliency methods for detecting feature importance over time using both precision (i.e., whether identified features contain meaningful signals) and recall (i.e., the number of features with signal identified as important). Through several experiments, we show that (i) in general, network architectures and saliency methods fail to reliably and accurately identify feature importance over time in time series data, (ii) this failure is mainly due to the conflation of time and feature domains, and (iii) the quality of saliency maps can be improved substantially by using our proposed two-step temporal saliency rescaling (TSR) approach, which first calculates the importance of each time step before calculating the importance of each feature at a time step.
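The abstract describes the two-step temporal saliency rescaling idea only at a high level. The following is a minimal illustrative sketch of that idea in Python, not the paper's implementation: the function name temporal_saliency_rescaling, the generic saliency_fn placeholder, and the zero baseline used for masking are assumptions introduced here. The sketch estimates a per-time-step score by masking each time step and measuring how much the saliency map changes, then rescales the per-feature saliency at each time step by that score.

# Illustrative sketch (assumed names, not the authors' code).
# saliency_fn(x) is a placeholder: any callable mapping an input of shape
# (time, features) to a saliency map of the same shape, e.g. a wrapper
# around gradient-based attributions of a trained model.
import numpy as np

def temporal_saliency_rescaling(x, saliency_fn, baseline=0.0):
    """Rescale a saliency map by first scoring whole time steps.

    x : np.ndarray of shape (time, features)
    saliency_fn : callable returning a saliency map of the same shape as x.
    """
    base_map = saliency_fn(x)          # raw per-(time step, feature) saliency
    T = x.shape[0]

    # Step 1: importance of each time step, measured as the total change in
    # the saliency map when that time step is replaced by a baseline value.
    time_scores = np.zeros(T)
    for t in range(T):
        x_masked = x.copy()
        x_masked[t, :] = baseline
        time_scores[t] = np.abs(base_map - saliency_fn(x_masked)).sum()

    # Step 2: importance of each feature at a time step, taken from the
    # original map and rescaled by that time step's score.
    rescaled = base_map * time_scores[:, None]
    return rescaled, time_scores

# Toy usage with a dummy "model" whose saliency is the input magnitude itself.
rng = np.random.default_rng(0)
x = rng.normal(size=(50, 8))
rescaled_map, time_scores = temporal_saliency_rescaling(x, np.abs)

Weighting feature-level saliency by a separate time-step score is what keeps the time and feature domains from being conflated: a feature is only highlighted if the time step it belongs to matters for the prediction as a whole.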
