Closed-loop verification of cyber-physical systems with neural network controllers offers strong safety guarantees under certain assumptions. It is, however, difficult to determine whether these guarantees apply at run time because verification assumptions may be violated. To predict safety violations in a verified system, we propose a three-step framework for monitoring the confidence in verification assumptions. First, we represent the sufficient condition for verified safety with a propositional logic formula over the assumptions. Second, we build calibrated confidence monitors, each of which evaluates the probability that one assumption holds. Third, we obtain the confidence in the verification guarantees by composing the assumption monitors with a composition function suited to the logical formula. Our framework provides theoretical bounds on the calibration and conservatism of the composed monitors. In two case studies, we demonstrate that the composed monitors improve over their constituents and successfully predict safety violations.
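To make the composition step concrete, here is a minimal Python sketch, not taken from the paper: it assumes the sufficient condition is a conjunction of assumptions and that each calibrated monitor outputs a probability at run time. The monitor names and both composition functions are illustrative; the product applies under an independence assumption, while the Fréchet lower bound is a conservative alternative that needs no independence.

```python
from typing import Callable, Dict, List

def compose_product(ps: List[float]) -> float:
    """P(A1 and ... and An) assuming the assumptions are independent."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def compose_frechet(ps: List[float]) -> float:
    """Conservative Fréchet lower bound on P(A1 and ... and An);
    valid for any dependence structure among the assumptions."""
    return max(0.0, sum(ps) - (len(ps) - 1))

def verified_safety_confidence(
    assumption_confidences: Dict[str, float],
    compose: Callable[[List[float]], float],
) -> float:
    """Confidence that the (conjunctive) sufficient condition for
    verified safety holds, given per-assumption monitor outputs."""
    return compose(list(assumption_confidences.values()))

# Hypothetical example: two calibrated monitors report, at the current
# time step, the probability that their assumption holds.
monitors = {"sensor_error_bounded": 0.95, "dynamics_model_valid": 0.90}
print(verified_safety_confidence(monitors, compose_product))  # 0.855
print(verified_safety_confidence(monitors, compose_frechet))  # 0.85
```

As the example shows, the Fréchet composition is never higher than the product, reflecting the trade-off the framework's bounds quantify: conservative compositions under-approximate the confidence, at the cost of more pessimistic run-time predictions.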