Continuous engineering of autonomous driving functions commonly deploys vehicles in road testing to obtain inputs that cause problematic decisions. While such discoveries lead to an improved system, they also challenge the foundation of testing with equivalence classes and the associated relative test coverage criterion. In this paper, we propose believed equivalence, where an equivalence class is initially established from expert belief and holds only as long as all available test cases within it have a consistent valuation. When a newly encountered test case breaks this consistency, the established categorization must be refined so that the originally believed equivalence is split into two. Finally, we focus on modules implemented with deep neural networks, where every category partitions inputs over the real domain, and we present both analytical and lazy methods to suggest the refinement. The concept is demonstrated on multiple autonomous driving modules, indicating the potential of our proposed approach.
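As a rough illustration of the idea only (not the paper's algorithm), the following Python sketch shows how a believed equivalence class over a one-dimensional real input interval might record test-case valuations, detect a consistency break, and lazily split at the offending input. All names (`BelievedEquivalence`, `add_case`, the interval bounds) are hypothetical and introduced purely for exposition.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical sketch: a believed equivalence class over the input interval
# [lo, hi). Test cases are (input, output-category) pairs; the belief holds
# while all recorded test cases share the same output category.

@dataclass
class BelievedEquivalence:
    lo: float                                      # left bound (inclusive)
    hi: float                                      # right bound (exclusive)
    cases: List[Tuple[float, int]] = field(default_factory=list)

    def consistent_with(self, category: int) -> bool:
        """A new test case keeps the belief only if it agrees with all recorded cases."""
        return all(cat == category for _, cat in self.cases)

    def add_case(self, x: float, category: int) -> List["BelievedEquivalence"]:
        """Record the case; on inconsistency, split at the new input (a 'lazy' cut)."""
        if not self.cases or self.consistent_with(category):
            self.cases.append((x, category))
            return [self]
        # Refinement: split the interval at x and redistribute recorded cases.
        left = BelievedEquivalence(self.lo, x, [c for c in self.cases if c[0] < x])
        right = BelievedEquivalence(x, self.hi,
                                    [c for c in self.cases if c[0] >= x] + [(x, category)])
        return [left, right]

if __name__ == "__main__":
    be = BelievedEquivalence(0.0, 10.0)
    be.add_case(2.0, 0)
    be.add_case(4.0, 0)
    refined = be.add_case(7.5, 1)   # breaks consistency -> split into two classes
    print([(r.lo, r.hi, r.cases) for r in refined])
```

Placing the cut exactly at the inconsistent input is one possible ("lazy") choice; an analytical method would instead derive the boundary from the module itself, e.g. from the behaviour of the underlying neural network.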