Forward invariance is a long-studied property in control theory used to certify that a dynamical system remains within a pre-specified set of states for all time; such certificates also admit robustness guarantees (e.g., the certificate continues to hold under perturbations). We propose a general framework for training and provably certifying robust forward invariance in Neural ODEs. We apply this framework in two settings: certified adversarial robustness for image classification, and certified safety in continuous control. Notably, our method empirically produces superior adversarial robustness guarantees compared to prior work on certifiably robust Neural ODEs (including implicit-depth models).
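For context, the standard control-theoretic definition of forward invariance (background, not stated explicitly in the abstract) is: a set $\mathcal{S}$ is forward invariant under the dynamics $\dot{x}(t) = f(x(t))$ if every trajectory that starts in $\mathcal{S}$ remains in $\mathcal{S}$ for all future time,

$$
x(0) \in \mathcal{S} \;\implies\; x(t) \in \mathcal{S} \quad \text{for all } t \ge 0.
$$

A robust variant of this certificate, as alluded to above, additionally requires the implication to hold when $f$ or $x(0)$ is subject to bounded perturbations.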