Computational learning theory states that many classes of Boolean formulas are learnable in polynomial time. This paper addresses the understudied question of how, in practice, such formulas can be learned by deep neural networks. Specifically, we analyze Boolean formulas associated with model-sampling benchmarks, combinatorial optimization problems, and random 3-CNFs with varying degrees of constrainedness. Our experiments indicate that: (i) neural learning generalizes better than pure rule-based systems and pure symbolic approaches; (ii) relatively small and shallow neural networks are very good approximators of formulas associated with combinatorial optimization problems; (iii) smaller formulas appear harder to learn, possibly because fewer positive (satisfying) examples are available; and (iv) interestingly, underconstrained 3-CNF formulas are more challenging to learn than overconstrained ones. These findings pave the way for a better understanding, construction, and use of interpretable neurosymbolic AI methods.
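To make the experimental setting concrete, the following is a minimal sketch, not the paper's actual setup: it generates a random 3-CNF, labels assignments by satisfaction, and fits a small, shallow network of the kind finding (ii) refers to. The choice of scikit-learn, the values of n and m, and the architecture are illustrative assumptions; varying the clause-to-variable ratio m/n around the satisfiability threshold (about 4.26) is one way to probe findings (iii) and (iv).

```python
# Illustrative sketch (assumptions: scikit-learn; n, m, and architecture are
# arbitrary choices, not the paper's). Train a small MLP to classify 0/1
# assignments of a random 3-CNF as satisfying or not.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, m = 12, 36  # m/n = 3.0: underconstrained relative to the ~4.26 threshold

# Random 3-CNF: each clause picks 3 distinct variables and random polarities.
clauses = [(rng.choice(n, size=3, replace=False), rng.integers(0, 2, size=3))
           for _ in range(m)]

def satisfies(x):
    """True iff the 0/1 assignment x satisfies every clause."""
    return all(any(x[v] == p for v, p in zip(vars_, pols))
               for vars_, pols in clauses)

# Enumerate all 2^n assignments and label them (feasible only for small n).
X = np.array([[(i >> j) & 1 for j in range(n)] for i in range(2 ** n)])
y = np.array([satisfies(x) for x in X], dtype=int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# A relatively small and shallow network, in the spirit of finding (ii).
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}, "
      f"positive rate: {y.mean():.3f}")  # few positives relates to (iii)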