Recent work has shown that logical background knowledge can be used in learning systems to compensate for a lack of labeled training data. Many methods work by creating a loss function that encodes this knowledge. However, the logic is often discarded after training, even though it remains useful at test time. Instead, we ensure that neural network predictions satisfy the knowledge by refining the predictions with an extra computation step. We introduce differentiable refinement functions that find a corrected prediction close to the original prediction. We study how to effectively and efficiently compute these refinement functions. Using a new algorithm called Iterative Local Refinement (ILR), we combine refinement functions to find refined predictions for logical formulas of any complexity. ILR finds refinements on complex SAT formulas in significantly fewer iterations than gradient descent and frequently finds solutions where gradient descent cannot. Finally, ILR produces competitive results on the MNIST addition task.
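As a minimal illustration (our own example, not taken from the abstract, assuming Gödel fuzzy semantics where a conjunction is evaluated as a minimum), a refinement function for a single conjunction admits a closed form: to raise the truth value of $a \wedge b = \min(a, b)$ to at least a target $t$ while staying as close as possible to the original prediction, it suffices to lift each conjunct that falls below $t$ up to $t$:

\[
\rho_t(a, b) \;=\; \bigl(\max(a, t),\, \max(b, t)\bigr),
\qquad
\min\!\bigl(\max(a, t),\, \max(b, t)\bigr) \;\geq\; t .
\]

Per-connective refinements of this kind are what ILR combines, applying them locally and iteratively through the formula structure to handle formulas of arbitrary complexity.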