Deep neural networks have emerged as the workhorse for a wide range of robotics and control applications, especially as models for dynamical systems. Such data-driven models are in turn used for designing and verifying autonomous systems. This is particularly useful in modeling medical systems, where data can be leveraged to individualize treatment. In safety-critical applications, it is important that the data-driven model conforms to established knowledge from the natural sciences. Such knowledge is often available, or can often be distilled into a (possibly black-box) model $M$; for instance, the unicycle model for an F1 racing car. In this light, we consider the following problem: given a model $M$ and a state transition dataset, we wish to best approximate the system model while remaining within a bounded distance of $M$. We propose a method that guarantees this conformance. Our first step is to distill the dataset into a few representative samples, called memories, using the idea of a growing neural gas. Next, using these memories, we partition the state space into disjoint subsets and compute bounds that the neural network must respect when its input is drawn from a particular subset. This serves as a symbolic wrapper for guaranteed conformance. We argue theoretically that this leads to only a bounded increase in approximation error, which can be controlled by increasing the number of memories. We show experimentally that on three case studies (Car Model, Drones, and Artificial Pancreas), our constrained neurosymbolic models conform to the specified $M$ models (each encoding various constraints) with order-of-magnitude improvements over the augmented Lagrangian and vanilla training methods.
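The symbolic wrapper described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the memories and per-cell bounds have already been computed, partitions the state space into Voronoi cells of the memories, and clips the network's prediction into the bounds of the cell containing the input. All names (`conformant_predict`, `nearest_memory`, the toy network) are hypothetical.

```python
import numpy as np

def nearest_memory(x, memories):
    """Index of the memory closest to x, i.e. x's Voronoi cell in the partition."""
    return int(np.argmin(np.linalg.norm(memories - np.asarray(x), axis=1)))

def conformant_predict(net, x, memories, lower, upper):
    """Clamp the network's output net(x) into [lower[i], upper[i]],
    where i is the cell of the partition that x falls in. Clipping
    enforces bounded distance from M by construction."""
    i = nearest_memory(x, memories)
    return np.clip(net(x), lower[i], upper[i])

# Toy usage: two memories, one per-cell scalar bound pair (illustrative values).
memories = np.array([[0.0, 0.0], [1.0, 1.0]])
lower = np.array([-0.5, 0.5])   # stand-in bounds derived from the model M
upper = np.array([0.5, 1.5])
net = lambda x: float(np.sum(x))  # placeholder "learned" dynamics model
y = conformant_predict(net, [0.9, 0.8], memories, lower, upper)
```

Here `[0.9, 0.8]` lies in the cell of the second memory, so the raw prediction `1.7` is clipped to the cell's upper bound `1.5`; the guarantee holds regardless of what the unconstrained network outputs.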