This report summarizes the 3rd International Verification of Neural Networks Competition (VNN-COMP 2022), held as a part of the 5th Workshop on Formal Methods for ML-Enabled Autonomous Systems (FoMLAS), which was co-located with the 34th International Conference on Computer-Aided Verification (CAV). VNN-COMP is held annually to facilitate the fair and objective comparison of state-of-the-art neural network verification tools, encourage the standardization of tool interfaces, and bring together the neural network verification community. To this end, standardized formats for networks (ONNX) and specifications (VNN-LIB) were defined, tools were evaluated on equal-cost hardware (using an automatic evaluation pipeline based on AWS instances), and tool parameters were chosen by the participants before the final test sets were made public. In the 2022 iteration, 11 teams participated on a diverse set of 12 scored benchmarks. This report summarizes the rules, benchmarks, participating tools, results, and lessons learned from this iteration of the competition.