We introduce Joint Probability Trees (JPT), a novel approach that makes learning of and reasoning about joint probability distributions tractable for practical applications. JPTs support both symbolic and subsymbolic variables in a single hybrid model, and they do not rely on prior knowledge about variable dependencies or families of distributions. JPT representations build on tree structures that partition the problem space into relevant subregions elicited from the training data instead of postulating a rigid dependency model prior to learning. Learning and reasoning scale linearly in JPTs, and the tree structure allows white-box reasoning about any posterior probability $P(Q|E)$, such that interpretable explanations can be provided for any inference result. Our experiments showcase the practical applicability of JPTs in high-dimensional heterogeneous probability spaces with millions of training samples, making them a promising alternative to classic probabilistic graphical models.
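To make the linear-scaling claim concrete, consider the following sketch, which reflects a standard leaf-mixture reading of such tree representations rather than the paper's exact formulation: assume each leaf $\lambda$ of the tree stores a prior $P(\lambda)$, estimated from the fraction of training samples it covers, together with independent univariate distributions over the variables. A posterior query then factors as a mixture over the leaves,
\[
P(Q \mid E) \;=\; \frac{\sum_{\lambda} P(\lambda)\, P(Q \mid \lambda) \prod_{i} P(E_i \mid \lambda)}{\sum_{\lambda} P(\lambda) \prod_{i} P(E_i \mid \lambda)},
\]
which can be evaluated in a single pass over the leaves, so inference cost grows linearly with the size of the tree. Since each leaf corresponds to an explicit subregion of the problem space, the terms of the sum also identify which subregions contribute most to an answer, which is what enables the white-box explanations mentioned above.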