Deep neural networks (DNNs) have demonstrated superior performance over classical machine learning in supporting many features of safety-critical systems. Although DNNs are now widely used in such systems (e.g., self-driving cars), there is limited progress on automated support for functional safety analysis in DNN-based systems. For example, identifying the root causes of errors, to enable both risk analysis and DNN retraining, remains an open problem. In this paper, we propose SAFE, a black-box approach to automatically characterize the root causes of DNN errors. SAFE relies on a transfer learning model pre-trained on ImageNet to extract features from error-inducing images. It then applies a density-based clustering algorithm to detect arbitrarily shaped clusters of images, each modeling a plausible cause of error. Finally, the clusters are used to effectively retrain and improve the DNN. The black-box nature of SAFE is motivated by our objective of not requiring changes to, or even access to, the DNN internals, thus facilitating adoption. Experimental results on case studies in the automotive domain show the superior ability of SAFE to identify different root causes of DNN errors. SAFE also yields significant improvements in DNN accuracy after retraining, while saving substantial execution time and memory compared to alternatives.
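To make the extract-then-cluster pipeline concrete, below is a minimal sketch assuming a torchvision ResNet-50 backbone as the ImageNet-pre-trained feature extractor and scikit-learn's DBSCAN as the density-based clustering step; the backbone choice, the clustering algorithm, the eps/min_samples values, and the image paths are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of SAFE-style root-cause characterization (assumptions noted above):
# 1) extract features from error-inducing images with an ImageNet-pre-trained
#    model, 2) group them with density-based clustering into arbitrarily
#    shaped clusters, each read as a plausible root cause of error.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.cluster import DBSCAN

# Pre-trained backbone with its classification head removed, used purely as a
# feature extractor; the DNN under analysis is never opened (black-box).
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()
backbone.eval()

# Standard ImageNet preprocessing for the backbone's expected input.
preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(image_paths):
    """Map each error-inducing image to a 2048-d ImageNet feature vector."""
    feats = []
    for path in image_paths:
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        feats.append(backbone(x).squeeze(0).numpy())
    return np.stack(feats)

# Hypothetical paths to images on which the DNN under test failed.
error_images = ["errors/img_001.png", "errors/img_002.png"]

features = extract_features(error_images)

# DBSCAN detects arbitrarily shaped clusters without fixing their number in
# advance; label -1 marks noise points. eps/min_samples are illustrative and
# would need tuning on the feature space at hand.
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(features)
```

Images sharing a cluster label would then be inspected together as one candidate root cause, and the clusters used to select or generate additional training data for retraining.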