This thesis focuses on advancing probabilistic logic programming (PLP), which combines probability theory for handling uncertainty with logic programming for representing relational structure. It aims to extend PLP to support both discrete and continuous random variables, a prerequisite for applications involving numeric data. The first contribution is context-specific likelihood weighting (CS-LW), a new sampling algorithm that exploits context-specific independencies for computational gains. Next, a new hybrid PLP language, DC#, is introduced; it integrates the syntax of Distributional Clauses with Bayesian logic programs and represents three types of independencies: i) conditional independencies (CIs), as modeled in Bayesian networks; ii) context-specific independencies (CSIs), expressed by logical rules; and iii) independencies among the attributes of related objects in relational models, expressed by combining rules. The scalable inference algorithm FO-CS-LW is introduced for DC#. Finally, the thesis addresses the lack of approaches for learning hybrid PLP from relational data and background knowledge by introducing DiceML, which learns both the structure and the parameters of hybrid probabilistic logic programs and tackles the relational autocompletion problem. The conclusion discusses open challenges and future directions for hybrid PLP.
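CS-LW extends standard likelihood weighting, a classic importance-sampling scheme for Bayesian networks: unobserved variables are sampled from their priors, and each sample is weighted by the likelihood of the evidence given its sampled parents. As a point of reference, here is a minimal sketch of plain likelihood weighting on a toy two-variable network (the network, variable names, and parameters are illustrative and not taken from the thesis):

```python
import random

# Toy network: rain -> wet_grass, with illustrative parameters.
P_RAIN = 0.2                      # prior P(rain = true)
P_WET = {True: 0.9, False: 0.15}  # P(wet_grass = true | rain)

def likelihood_weighting(n_samples, evidence_wet=True, seed=0):
    """Estimate P(rain | wet_grass = evidence_wet) by likelihood weighting:
    sample the unobserved variable (rain) from its prior, then weight the
    sample by the likelihood of the observed evidence (wet_grass)."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n_samples):
        rain = rng.random() < P_RAIN  # sample rain from its prior
        # Weight = P(observed evidence | sampled parents).
        w = P_WET[rain] if evidence_wet else 1.0 - P_WET[rain]
        num += w * rain               # accumulate weight where rain is true
        den += w                      # total weight
    return num / den

# Exact posterior here is 0.18 / (0.18 + 0.12) = 0.6; the estimate converges to it.
print(likelihood_weighting(100_000))
```

CS-LW improves on this baseline by additionally exploiting context-specific independencies, i.e., independencies that hold only under particular variable assignments, to avoid redundant computation.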