Deep Learning has improved results in several areas during the past years. For non-safety-related products, the adoption of AI and ML poses few obstacles, whereas in safety-critical applications the robustness of such approaches remains an open issue. A common challenge for Deep Neural Networks (DNNs) occurs when they are exposed to previously unseen, out-of-distribution samples, for which DNNs can yield high-confidence predictions despite having no prior knowledge of the input. In this paper, we analyse two supervisors on two well-known DNNs with varied training setups and find that outlier detection performance improves with the quality of the training procedure. We analyse the performance of the supervisor after each epoch of the training cycle to investigate how supervisor performance develops as the accuracy converges. Understanding the relationship between training results and supervisor performance is valuable for improving the robustness of the model, and it indicates where more work is needed to create generalized models for safety-critical applications.
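The abstract does not name the specific supervisors studied; as a minimal illustrative sketch, a common baseline supervisor of this kind thresholds the network's maximum softmax confidence and flags low-confidence inputs as potential out-of-distribution samples. The function names and the threshold value below are assumptions for illustration, not the paper's method.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def supervisor_accepts(logits, threshold=0.9):
    """Accept a prediction only if the maximum softmax probability
    reaches the threshold; otherwise flag the input as a potential
    out-of-distribution sample. The threshold is illustrative."""
    confidence = softmax(logits).max(axis=-1)
    return confidence >= threshold

# A peaked, confident prediction passes; a flat, low-confidence
# prediction is rejected as a suspected outlier.
in_dist = np.array([[8.0, 0.5, 0.3]])
out_dist = np.array([[1.1, 1.0, 0.9]])
print(supervisor_accepts(in_dist))   # [ True]
print(supervisor_accepts(out_dist))  # [False]
```

In this sketch, the supervisor's operating point (the threshold) would be tuned on held-out data, which is consistent with the paper's observation that detection quality depends on how well the underlying model was trained.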