Embedding methods which enforce a partial order or lattice structure over the concept space, such as Order Embeddings (OE) (Vendrov et al., 2016), are a natural way to model transitive relational data (e.g. entailment graphs). However, OE learns a deterministic knowledge base, limiting expressiveness of queries and the ability to use uncertainty for both prediction and learning (e.g. learning from expectations). Probabilistic extensions of OE (Lai and Hockenmaier, 2017) have provided the ability to somewhat calibrate these denotational probabilities while retaining the consistency and inductive bias of ordered models, but lack the ability to model the negative correlations found in real-world knowledge. In this work we show that a broad class of models that assign probability measures to OE can never capture negative correlation, which motivates our construction of a novel box lattice and accompanying probability measure to capture anticorrelation and even disjoint concepts, while still providing the benefits of probabilistic modeling, such as the ability to perform rich joint and conditional queries over arbitrary sets of concepts, and both learning from and predicting calibrated uncertainty. We show improvements over previous approaches in modeling the Flickr and WordNet entailment graphs, and investigate the power of the model.
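To make the box-lattice idea concrete, the following is a minimal sketch (not the authors' implementation; all names and the toy boxes are hypothetical) of how a probability measure over axis-aligned boxes in the unit hypercube supports joint and conditional queries, and why disjoint boxes allow negative correlation that order embeddings cannot express.

```python
import numpy as np

# Sketch: each concept is an axis-aligned box inside the unit hypercube,
# parameterized by per-dimension minimum and maximum corners. The marginal
# probability of a concept is the volume of its box; joint probabilities
# come from the volume of the box intersection (the lattice meet).

def box_volume(box_min, box_max):
    """Volume of an axis-aligned box; zero if it is empty in any dimension."""
    side_lengths = np.clip(box_max - box_min, 0.0, None)
    return float(np.prod(side_lengths))

def joint_probability(a_min, a_max, b_min, b_max):
    """P(a, b): volume of the intersection of the two boxes."""
    inter_min = np.maximum(a_min, b_min)
    inter_max = np.minimum(a_max, b_max)
    return box_volume(inter_min, inter_max)

def conditional_probability(a_min, a_max, b_min, b_max):
    """P(a | b) = P(a, b) / P(b), e.g. a calibrated entailment score."""
    p_b = box_volume(b_min, b_max)
    return joint_probability(a_min, a_max, b_min, b_max) / p_b if p_b > 0 else 0.0

# Toy 2-D boxes (hypothetical): "cat" is contained in "animal", while
# "cat" and "dog" are disjoint, so their joint probability is zero and
# they are negatively correlated: P(cat, dog) < P(cat) * P(dog).
cat = (np.array([0.1, 0.1]), np.array([0.4, 0.5]))
dog = (np.array([0.5, 0.1]), np.array([0.9, 0.5]))
animal = (np.array([0.05, 0.05]), np.array([0.95, 0.6]))

print(conditional_probability(*animal, *cat))  # 1.0: cat entails animal
print(joint_probability(*cat, *dog))           # 0.0: disjoint concepts
```

Because disjoint or partially overlapping boxes are representable, the measure can assign P(a, b) below the product of marginals, which is the anticorrelation case the abstract argues is unreachable for probability measures placed on order embeddings.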