Local structure such as context-specific independence (CSI) has received much attention in the probabilistic graphical model (PGM) literature, as it facilitates both the modeling of large, complex systems and reasoning with them. In this paper, we provide a new perspective on how to learn CSIs from data. We propose to first learn a functional and parameterized representation of a conditional probability table (CPT), such as a neural network. Next, we quantize this continuous function into an arithmetic circuit representation that facilitates efficient inference. In the first step, we can leverage the many powerful tools that have been developed in the machine learning literature. In the second step, we exploit more recently developed analytic tools from explainable AI for the purpose of learning CSIs. Finally, we contrast our approach, empirically and conceptually, with more traditional variable-splitting approaches that search for CSIs more explicitly.
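As a rough illustration of the two-step pipeline sketched above (and not the paper's actual implementation), the following minimal Python example first fits a small neural network to samples from a CPT, then quantizes the learned conditional probabilities; parent contexts that map to the same quantized value are candidates for CSI. All names here (e.g., `true_p`) are illustrative assumptions, and the quantization is a naive rounding rather than a circuit construction.

```python
# Minimal sketch, assuming binary variables and using scikit-learn's
# MLPClassifier as the functional CPT representation.
import itertools
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Ground-truth CPT for P(Y=1 | A, B, C), with a built-in CSI:
# given the context A=1, Y is independent of B and C.
def true_p(a, b, c):
    return 0.9 if a == 1 else (0.8 if b == 1 else 0.2 + 0.1 * c)

# Sample training data from the true distribution.
X = rng.integers(0, 2, size=(5000, 3))
y = (rng.random(5000) < np.array([true_p(*row) for row in X])).astype(int)

# Step 1: learn a functional, parameterized CPT representation (a neural net).
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)

# Step 2: quantize the learned conditional probabilities. Contexts sharing a
# quantized value (here, all contexts with A=1) expose the CSI.
for a, b, c in itertools.product([0, 1], repeat=3):
    p = net.predict_proba([[a, b, c]])[0, 1]
    print(f"A={a} B={b} C={c}  P(Y=1 | A,B,C) ~ {round(p, 1)}")
```

In this toy run, all four contexts with A=1 quantize to the same value, recovering the independence of Y from B and C in that context; a real pipeline would compile such merged contexts into an arithmetic circuit rather than printing them.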