Despite the large number of sophisticated deep neural network (DNN) verification algorithms, DNN verifier developers, users, and researchers still face several challenges. First, verifier developers must contend with the rapidly changing DNN field to support new DNN operations and property types. Second, verifier users have the burden of selecting a verifier input format to specify their problem. Due to the many input formats, this decision can greatly restrict the verifiers that a user may run. Finally, researchers face difficulties in re-using benchmarks to evaluate and compare verifiers, due to the large number of input formats required to run different verifiers. Existing benchmarks are rarely in formats supported by verifiers other than the one for which the benchmark was introduced. In this work we present DNNV, a framework for reducing the burden on DNN verifier researchers, developers, and users. DNNV standardizes input and output formats, includes a simple yet expressive DSL for specifying DNN properties, and provides powerful simplification and reduction operations to facilitate the application, development, and comparison of DNN verifiers. We show how DNNV increases the support of verifiers for existing benchmarks from 30% to 74%.
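The abstract refers to DNNV's Python-embedded DSL for specifying DNN properties without showing it. As a rough illustration only, the sketch below shows what a local-robustness property might look like in such a DSL. The specific names (Network, Image, Parameter, Forall, Implies, argmax), the file paths, and the exact syntax are assumptions drawn from DNNV's public documentation rather than from this excerpt, and the file is intended to be interpreted by the dnnv tool together with an ONNX network, not executed as a standalone Python script.

```python
# Hypothetical DNNP-style property file (a sketch, not the paper's exact syntax).
# It states local robustness: all inputs within an L-infinity ball of radius
# epsilon around a given image receive the same classification as that image.
from dnnv.properties import *

N = Network("N")                                   # network under verification, bound at run time
x = Image("path/to/image.npy")                     # placeholder path to a concrete input
epsilon = Parameter("epsilon", float, default=0.01)  # perturbation radius, settable from the CLI

Forall(
    x_,  # x_ is a free variable ranging over inputs; it is resolved by the DSL interpreter
    Implies(
        ((x - epsilon) <= x_) & (x_ <= (x + epsilon)),  # x_ lies in the epsilon-ball around x
        argmax(N(x_)) == argmax(N(x)),                  # classification is unchanged
    ),
)
```

In the workflow the abstract describes, a property file of this kind would be paired with a network in a standard format and handed to DNNV, which simplifies and reduces the problem as needed and dispatches it to the chosen verifier backend; the exact command-line interface is not specified in this excerpt.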