Decisions made by deep neural networks (DNNs) have a tremendous impact on the dependability of the systems in which they are embedded, which is of particular concern in the realm of safety-critical systems. In this paper we consider specification-based falsification of DNNs with the aim of supporting debugging and repair. We propose DeepOpt, a falsification technique based on black-box optimization, which generates counterexamples from a DNN in a refinement loop. DeepOpt can analyze input-output specifications, which makes it more general than falsification approaches that only support robustness specifications. The key idea is to algebraically combine the DNN with the input and output constraints derived from the specification. We have implemented DeepOpt and evaluated it on DNNs of varying sizes and architectures. Experimental comparisons demonstrate DeepOpt's precision and scalability; in particular, DeepOpt requires very few queries to the DNN.
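To make the key idea concrete, the sketch below shows one way an optimization-based falsification loop of this kind can be set up: the input constraints bound the search domain, the output constraint is turned into a scalar violation margin, and the DNN is queried only as a black box. The toy network, the helper names (`violation_margin`, `falsify`), and the simple sample-and-shrink refinement are illustrative assumptions, not the DeepOpt algorithm itself.

```python
# Minimal sketch (not the authors' DeepOpt implementation): black-box falsification
# of an input-output specification by optimization over the input domain.
import numpy as np

rng = np.random.default_rng(0)

# Toy "DNN": a fixed random 2-layer ReLU network, queried only as a black box.
W1, b1 = rng.standard_normal((8, 2)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((1, 8)), rng.standard_normal(1)

def dnn(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Specification: for all x with lo <= x <= hi (input constraint),
# the output must satisfy dnn(x) <= 1.0 (output constraint).
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

def violation_margin(x):
    """Negative while the output constraint holds; >= 0 means x is a counterexample."""
    return float(dnn(x)[0] - 1.0)

def falsify(num_rounds=20, samples_per_round=50, shrink=0.7):
    """Refinement loop: sample the input box, keep the most violating point,
    and shrink the search region around it (a crude stand-in for a black-box
    optimizer)."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    best_x, best_m = center, violation_margin(center)
    for _ in range(num_rounds):
        xs = rng.uniform(np.maximum(lo, center - radius),
                         np.minimum(hi, center + radius),
                         size=(samples_per_round, lo.size))
        for x in xs:
            m = violation_margin(x)
            if m > best_m:
                best_x, best_m = x, m
        if best_m >= 0.0:          # specification violated
            return best_x, best_m
        center, radius = best_x, radius * shrink
    return None, best_m            # no counterexample found within the budget

cex, margin = falsify()
print("counterexample:", cex, "margin:", margin)
```

In this formulation the number of queries to the DNN is exactly the number of evaluations of `violation_margin`, which is the quantity a query-efficient falsifier tries to keep small.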