Propositional model counting, or #SAT, is the problem of computing the number of satisfying assignments of a Boolean formula. Many problems from different application areas, including many discrete probabilistic inference problems, can be translated into model counting problems to be solved by #SAT solvers. Exact #SAT solvers, however, often do not scale to industrial-size instances. In this paper, we present Neuro#, an approach for learning branching heuristics to improve the performance of exact #SAT solvers on instances from a given family of problems. We show experimentally that our method reduces the step count on similarly distributed held-out instances and generalizes to much larger instances from the same problem family. It achieves these results across a number of problem families with very different structures. Beyond step-count improvements, Neuro# can also achieve orders-of-magnitude wall-clock speedups over the vanilla solver on larger instances in some problem families, despite the runtime overhead of querying the model.
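To ground the terminology, the sketch below shows a DPLL-style exact model counter in which the branching heuristic is a pluggable function. This is a minimal illustration, not Neuro#'s actual solver; the function names (`count_models`, `pick_branch_var`, `most_frequent_var`) are illustrative assumptions, and the hand-crafted heuristic stands in for the component that Neuro# would replace with a learned model.

```python
# Minimal DPLL-style exact model counter (illustrative sketch only; Neuro#'s
# solver and its learned branching heuristic are far more sophisticated).
from typing import Callable, FrozenSet, List, Optional, Set

Clause = FrozenSet[int]  # literal encoding: +v is variable v, -v its negation

def simplify(clauses: List[Clause], lit: int) -> Optional[List[Clause]]:
    """Assign literal `lit` True: drop satisfied clauses, shrink the others.
    Returns None on conflict (an empty clause was produced)."""
    out = []
    for c in clauses:
        if lit in c:
            continue                 # clause satisfied, drop it
        if -lit in c:
            c = c - {-lit}           # falsified literal removed
            if not c:
                return None          # empty clause: no models on this branch
        out.append(c)
    return out

def count_models(clauses: Optional[List[Clause]],
                 variables: Set[int],
                 pick_branch_var: Callable[[List[Clause], Set[int]], int]) -> int:
    """Exact #SAT by case-splitting; `pick_branch_var` is the heuristic slot."""
    if clauses is None:
        return 0                     # a conflict was reached on this path
    if not clauses:
        return 2 ** len(variables)   # all remaining variables are free
    v = pick_branch_var(clauses, variables)
    rest = variables - {v}
    return (count_models(simplify(clauses, v), rest, pick_branch_var) +
            count_models(simplify(clauses, -v), rest, pick_branch_var))

def most_frequent_var(clauses: List[Clause], variables: Set[int]) -> int:
    """A hand-crafted stand-in heuristic: branch on the most frequent variable."""
    counts: dict = {}
    for c in clauses:
        for lit in c:
            counts[abs(lit)] = counts.get(abs(lit), 0) + 1
    return max(counts, key=counts.get)

# (x1 or x2) and (not x1 or x3) over variables {x1, x2, x3} has 4 models.
cnf = [frozenset({1, 2}), frozenset({-1, 3})]
print(count_models(cnf, {1, 2, 3}, most_frequent_var))  # -> 4
```

In this framing, Neuro# swaps a hand-crafted rule like `most_frequent_var` for a learned model queried at each branching decision, which is the source of both the step-count reductions and the per-query runtime overhead mentioned above.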