Deep neural networks have emerged as the workhorse for a large section of robotics and control applications, especially as models for dynamical systems. Such data-driven models are in turn used for designing and verifying autonomous systems. This is particularly useful in modeling medical systems, where data can be leveraged to individualize treatment. In safety-critical applications, it is important that the data-driven model is conformant to established knowledge from the natural sciences. Such knowledge is often available, or can be distilled into a (possibly black-box) model $M$; for instance, the unicycle model (which encodes Newton's laws) for an F1 racing car. In this light, we consider the following problem: given a model $M$ and a state transition dataset, we wish to best approximate the system model while remaining within a bounded distance of $M$. We propose a method to guarantee this conformance. Our first step is to distill the dataset into a few representative samples called memories, using the idea of a growing neural gas. Next, using these memories, we partition the state space into disjoint subsets and compute bounds that the neural network must respect when its input is drawn from a particular subset. This serves as a symbolic wrapper for guaranteed conformance. We argue theoretically that this leads to only a bounded increase in approximation error, which can be controlled by increasing the number of memories. We experimentally show that, on three case studies (Car Model, Drones, and Artificial Pancreas), our constrained neurosymbolic models conform to the specified models $M$ (each encoding various constraints) with order-of-magnitude improvements compared to augmented Lagrangian and vanilla training methods. Our code can be found at https://github.com/kaustubhsridhar/Constrained_Models.
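A minimal, illustrative sketch of the idea described above (not the authors' implementation): the state space is partitioned by nearest-memory lookup, and for inputs falling in a given cell the network's prediction is forced to respect a bound, here taken to be an eps-ball around the prior model $M$'s prediction. The unicycle prior, the class and function names, and the per-cell radii below are assumptions made for this example.

```python
import numpy as np

def prior_model_M(x, u, dt=0.1):
    """Placeholder physics prior: a unicycle model. Any (black-box) M would work."""
    px, py, th, v = x
    a, om = u
    return np.array([px + v * np.cos(th) * dt,
                     py + v * np.sin(th) * dt,
                     th + om * dt,
                     v + a * dt])

class ConformantWrapper:
    """Symbolic wrapper sketch: nearest-memory lookup selects a cell of the partition,
    and the network output is projected into that cell's conformance bound around M."""
    def __init__(self, memories, eps_per_cell):
        self.memories = np.asarray(memories)   # representative states, e.g. from a growing neural gas
        self.eps = np.asarray(eps_per_cell)    # one conformance radius per cell (assumed given)

    def _cell(self, x):
        # Voronoi-style assignment: index of the nearest memory.
        return int(np.argmin(np.linalg.norm(self.memories - x, axis=1)))

    def predict(self, nn_predict, x, u):
        k = self._cell(x)
        y_nn = np.asarray(nn_predict(x, u))
        y_M = prior_model_M(x, u)
        delta = y_nn - y_M
        norm = np.linalg.norm(delta)
        if norm <= self.eps[k]:
            return y_nn                              # already conformant in this cell
        return y_M + self.eps[k] * delta / norm      # project onto the eps-ball around M

# Usage with stand-in data; real memories would come from a growing-neural-gas pass over the dataset.
rng = np.random.default_rng(0)
memories = rng.normal(size=(32, 4))
wrapper = ConformantWrapper(memories, eps_per_cell=np.full(32, 0.05))
nn_predict = lambda x, u: prior_model_M(x, u) + 0.1 * rng.normal(size=4)  # stand-in network
print(wrapper.predict(nn_predict, np.zeros(4), np.array([1.0, 0.2])))
```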