Closed-loop verification of cyber-physical systems with neural network controllers offers strong safety guarantees under certain assumptions. It is, however, difficult to determine whether these guarantees apply at run time because verification assumptions may be violated. To predict safety violations in a verified system, we propose a three-step confidence composition (CoCo) framework for monitoring verification assumptions. First, we represent the sufficient condition for verified safety with a propositional logical formula over assumptions. Second, we build calibrated confidence monitors that evaluate the probability that each assumption holds. Third, we obtain the confidence in the verification guarantees by composing the assumption monitors using a composition function suitable for the logical formula. Our CoCo framework provides theoretical bounds on the calibration and conservatism of compositional monitors. Two case studies show that compositional monitors are calibrated better than their constituents and successfully predict safety violations.
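To make the composition step concrete, here is a minimal illustrative sketch, not the authors' implementation: for a safety condition expressed as a conjunction of assumptions, per-assumption monitor confidences can be composed either under an independence assumption (product) or conservatively via the Fréchet lower bound, which holds without any independence assumption. All names and values below are hypothetical.

```python
# Hypothetical sketch of composition functions for a conjunctive
# verification condition  phi = a_1 AND ... AND a_n, where each
# calibrated monitor reports P(a_i). Not the paper's implementation.

from typing import Sequence


def compose_independent(confidences: Sequence[float]) -> float:
    """Composed confidence assuming the assumptions hold independently:
    P(a_1 and ... and a_n) = prod_i P(a_i)."""
    result = 1.0
    for p in confidences:
        result *= p
    return result


def compose_frechet_lower(confidences: Sequence[float]) -> float:
    """Conservative composed confidence via the Frechet lower bound:
    P(a_1 and ... and a_n) >= max(0, sum_i P(a_i) - (n - 1))."""
    n = len(confidences)
    return max(0.0, sum(confidences) - (n - 1))


if __name__ == "__main__":
    # Example: three assumption monitors report these calibrated confidences.
    monitor_outputs = [0.95, 0.90, 0.99]
    print(compose_independent(monitor_outputs))    # 0.84645
    print(compose_frechet_lower(monitor_outputs))  # 0.84
```

The product composition is higher but relies on independence; the Fréchet bound trades tightness for conservatism, matching the framework's interest in bounding the conservatism of compositional monitors.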