We design a predictive layer for structured-output prediction (SOP) that can be plugged into any neural network, guaranteeing that its predictions are consistent with a set of predefined symbolic constraints. Our Semantic Probabilistic Layer (SPL) can model intricate correlations and hard constraints over a structured output space, while remaining amenable to end-to-end learning via maximum likelihood. SPLs combine exact probabilistic inference with logical reasoning in a clean and modular way, learning complex distributions and restricting their support to the solutions of the constraint. As such, they can faithfully and efficiently model complex SOP tasks that are beyond the reach of alternative neuro-symbolic approaches. We empirically demonstrate that SPLs outperform these competitors in accuracy on challenging SOP tasks, including hierarchical multi-label classification, pathfinding, and preference learning, while retaining perfect constraint satisfaction.
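To make the support restriction concrete, here is a minimal sketch of the idea (the symbols $f$, $q_{\Theta}$, $c_{K}$, and $Z$ are introduced here for illustration and are not fixed by the abstract): the layer composes an expressive distribution over outputs, conditioned on the neural embedding, with an indicator of the symbolic constraint $K$, and renormalizes so that all probability mass lies on valid structures,
\[
  p(\mathbf{y} \mid \mathbf{x})
  \;=\;
  \frac{q_{\Theta}\bigl(\mathbf{y} \mid f(\mathbf{x})\bigr)\, c_{K}(\mathbf{x}, \mathbf{y})}
       {Z(\mathbf{x})},
  \qquad
  c_{K}(\mathbf{x}, \mathbf{y}) \;=\; \mathbb{1}\bigl[(\mathbf{x}, \mathbf{y}) \models K\bigr],
  \qquad
  Z(\mathbf{x}) \;=\; \sum_{\mathbf{y}'} q_{\Theta}\bigl(\mathbf{y}' \mid f(\mathbf{x})\bigr)\, c_{K}(\mathbf{x}, \mathbf{y}'),
\]
where $f$ is the neural backbone and the sum ranges over all candidate structures. Under this reading, "exact probabilistic inference with logical reasoning" amounts to computing the product and the normalizer $Z(\mathbf{x})$ exactly, which is what allows the layer to be trained end-to-end by maximum likelihood while guaranteeing that predictions satisfy $K$.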