We study the problem of entity-relation extraction in the presence of symbolic domain knowledge. Such knowledge takes the form of an ontology that defines relations and their permissible argument types. Previous work integrates this knowledge either through self-training or through approximations that lose the precise meaning of the logical expressions. By contrast, our approach employs semantic loss, which captures the exact meaning of a logical sentence by maintaining a probability distribution over all possible states and guiding the model toward solutions that minimize constraint violations. Focusing on low-data regimes, we show that semantic loss outperforms the baselines by a wide margin.
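For reference, a minimal sketch of the standard semantic loss formulation (following Xu et al., 2018; the notation here is illustrative and not taken from this abstract): for a logical sentence \(\alpha\) over Boolean variables \(X_1, \dots, X_n\) and a vector \(p\) of predicted probabilities, the loss is proportional to the negative log-probability of sampling a state that satisfies \(\alpha\):

\[
L^{s}(\alpha, p) \;\propto\; -\log \sum_{x \models \alpha} \;\prod_{i \,:\, x \models X_i} p_i \prod_{i \,:\, x \models \lnot X_i} (1 - p_i).
\]

The loss vanishes when the predicted distribution places all of its mass on satisfying states and grows as mass shifts to violating states, which is how it steers the model toward constraint-consistent solutions.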