We demonstrate the first Recurrent Neural Network architecture for learning Signal Temporal Logic (STL) formulas, and present the first systematic comparison of formula inference methods. Legacy systems embed much expert knowledge that is not explicitly formalized. There is great interest in learning formal specifications that characterize the ideal behavior of such systems: formulas in temporal logic that are satisfied by the system's output signals. Such specifications can be used to better understand the system's behavior and to improve the design of its next iteration. Previous inference methods either assumed certain formula templates, or performed a heuristic enumeration of all possible templates. This work proposes a neural network architecture that infers the formula structure via gradient descent, eliminating the need to impose specific templates. It combines the learning of formula structure and formula parameters in a single optimization. Through systematic comparison, we demonstrate that this method achieves similar or better misclassification rates (MCR) than enumerative and lattice methods. We also observe that different formulas can achieve similar MCR, empirically demonstrating the under-determined nature of the temporal logic inference problem.
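The key idea of gradient-based STL inference can be illustrated on a toy case. Below is a minimal sketch (not the paper's architecture) showing how a parameter of a fixed formula G(x > c), i.e. "the signal always stays above threshold c", can be learned from labeled signals by gradient descent on a smoothed (soft-min) robustness measure. The function names and the hinge-style loss are illustrative assumptions, chosen only to make the mechanism concrete.

```python
import math

def soft_min(values, beta=10.0):
    # Smooth, differentiable surrogate for min(values) via log-sum-exp;
    # it lower-bounds the true minimum and approaches it as beta grows.
    m = min(values)
    return m - (1.0 / beta) * math.log(sum(math.exp(-beta * (v - m)) for v in values))

def robustness(signal, c, beta=10.0):
    # Robustness of G(x > c): the (smoothed) minimum over time of x_t - c.
    # Positive robustness means the formula is satisfied.
    return soft_min([x - c for x in signal], beta)

def learn_threshold(pos, neg, c=0.0, lr=0.05, steps=500, margin=0.1):
    # Hypothetical training loop: positive signals should satisfy the formula
    # with robustness > margin, negatives should violate it (robustness < -margin).
    # Since every term x_t - c shifts equally with c, dr/dc = -1 exactly,
    # so the gradient of each active hinge term is +1 (positives) or -1 (negatives).
    for _ in range(steps):
        grad = 0.0
        for s in pos:
            if robustness(s, c) < margin:
                grad += 1.0   # d/dc of max(0, margin - r) with dr/dc = -1
        for s in neg:
            if robustness(s, c) > -margin:
                grad -= 1.0   # d/dc of max(0, margin + r) with dr/dc = -1
        c -= lr * grad
    return c
```

The full problem the abstract addresses is harder: the structure of the formula (which operators, in which nesting) must be learned as well, which is what the proposed recurrent architecture handles instead of fixing G(x > c) in advance.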