We consider the problem of explaining the temporal behavior of black-box systems using human-interpretable models. Following recent research trends, we rely on the fundamental yet interpretable models of deterministic finite automata (DFAs) and linear temporal logic (LTL) formulas. In contrast to most existing work on learning DFAs and LTL formulas, we rely on positive examples only. Our motivation is that negative examples are generally difficult to observe, particularly from black-box systems. To learn meaningful models from positive examples alone, we design algorithms that use the conciseness and language minimality of models as regularizers. Our algorithms adopt two approaches: a symbolic one and a counterexample-guided one. The symbolic approach exploits an efficient encoding of language minimality as a constraint satisfaction problem, whereas the counterexample-guided approach relies on generating suitable negative examples to prune the search. Both approaches yield effective algorithms with theoretical guarantees on the learned models. To assess their effectiveness, we evaluate all of our algorithms on synthetic data.
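To make the counterexample-guided idea concrete, below is a minimal, self-contained Python sketch. It is not the implementation evaluated in the paper: it substitutes brute-force DFA enumeration for the paper's symbolic encodings, and it checks language minimality only up to a bounded word length. All identifiers (`cegar_learn`, `max_states`, `horizon`) are illustrative assumptions, not names from the paper.

```python
from itertools import product

ALPHABET = "ab"

def run(dfa, word):
    """Run a DFA (transition dict, accepting set) on a word; state 0 is initial."""
    delta, accepting = dfa
    state = 0
    for symbol in word:
        state = delta[(state, symbol)]
    return state in accepting

def enumerate_dfas(num_states):
    """Brute-force enumeration of all DFAs with exactly `num_states` states."""
    keys = [(q, s) for q in range(num_states) for s in ALPHABET]
    for targets in product(range(num_states), repeat=len(keys)):
        delta = dict(zip(keys, targets))
        for bits in product([False, True], repeat=num_states):
            yield delta, {q for q in range(num_states) if bits[q]}

def learn(positives, negatives, max_states):
    """Smallest DFA (by state count) accepting all positives and rejecting
    all negatives, or None if no DFA within the size budget is consistent."""
    for n in range(1, max_states + 1):
        for dfa in enumerate_dfas(n):
            if all(run(dfa, w) for w in positives) and \
               not any(run(dfa, w) for w in negatives):
                return dfa
    return None

def words_up_to(horizon):
    """All words over ALPHABET of length at most `horizon`."""
    for length in range(horizon + 1):
        for letters in product(ALPHABET, repeat=length):
            yield "".join(letters)

def cegar_learn(positives, max_states=2, horizon=4):
    """Counterexample-guided loop: repeatedly find a word that the current
    candidate accepts but that some equally small consistent DFA could reject,
    and add it as a synthetic negative example, shrinking the learned language."""
    negatives = set()
    candidate = learn(positives, negatives, max_states)
    while True:
        for w in words_up_to(horizon):
            if w in positives or not run(candidate, w):
                continue  # w is required or already rejected
            pruned = learn(positives, negatives | {w}, max_states)
            if pruned is not None:  # w is removable: treat it as a negative
                negatives.add(w)
                candidate = pruned
                break
        else:
            # no removable word up to the horizon: language-minimal (bounded check)
            return candidate, negatives

positives = {"ab", "abab"}
dfa, negatives = cegar_learn(positives)
print("synthetic negatives:", sorted(negatives))
for w in ["", "b", "ab", "abab", "ba"]:
    print(repr(w), "->", "accept" if run(dfa, w) else "reject")
```

The sketch replaces the trivial universal candidate with ever smaller languages: each generated negative example prunes all consistent models that accept it, which is the role the abstract assigns to negative examples. The paper's actual algorithms perform both the consistency check and the minimality check symbolically rather than by enumeration, and apply to LTL formulas as well as DFAs.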