The utilisation of Deep Learning (DL) raises new challenges regarding its dependability in critical applications. Sound verification and validation methods are needed to assure the safe and reliable use of DL. However, state-of-the-art debug testing methods for DL that aim at detecting adversarial examples (AEs) ignore the operational profile, which statistically depicts the software's future operational use. This may lead to very modest effectiveness in improving the software's delivered reliability, as the testing budget is likely to be wasted on detecting AEs that are unrealistic or encountered only rarely in real-life operation. In this paper, we first present the novel notion of "operational AEs", which are AEs that have a relatively high chance of being seen in future operation. We then provide an initial design of a new DL testing method to efficiently detect operational AEs, as well as some insights into our prospective research plan.
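To make the idea of "operational AEs" concrete, the following is a minimal, hypothetical sketch (not the paper's actual method): candidate AEs are ranked by their estimated probability density under the operational profile, approximated here with a kernel density estimate fitted on inputs collected from real operation. The names `operational_data`, `candidate_aes`, and `top_k` are illustrative assumptions.

```python
# Hypothetical sketch: prioritise candidate AEs that are likely to be
# encountered in future operation, by scoring them against a density
# estimate of the operational profile. Not the paper's proposed method.

import numpy as np
from sklearn.neighbors import KernelDensity


def rank_operational_aes(operational_data, candidate_aes, top_k=10, bandwidth=0.5):
    """Return the top_k candidate AEs with the highest estimated
    density under the operational profile, plus their log-densities."""
    # Approximate the operational profile with a Gaussian KDE fitted on
    # (flattened) inputs observed in real-life operation.
    kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth)
    kde.fit(operational_data.reshape(len(operational_data), -1))

    # Higher log-density means the AE is more representative of
    # future operational use, i.e. a more "operational" AE.
    log_density = kde.score_samples(candidate_aes.reshape(len(candidate_aes), -1))

    order = np.argsort(log_density)[::-1][:top_k]
    return candidate_aes[order], log_density[order]


if __name__ == "__main__":
    # Synthetic stand-ins for real operational inputs and candidate AEs.
    rng = np.random.default_rng(0)
    operational_data = rng.normal(size=(1000, 28 * 28))
    candidate_aes = rng.normal(size=(50, 28 * 28))
    top_aes, scores = rank_operational_aes(operational_data, candidate_aes, top_k=5)
    print(scores)
```

Under these assumptions, a testing budget spent on the top-ranked candidates targets AEs that contribute most to the software's delivered reliability, rather than unrealistic or rarely encountered ones.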