The A* algorithm is commonly used to solve NP-hard combinatorial optimization problems. When provided with an accurate heuristic function, A* can solve such problems in time that is polynomial in the solution depth; it follows that computing an accurate heuristic for many such problems is itself NP-hard, since otherwise the underlying problem could be solved in polynomial time. In this context, we examine a line of recent publications that propose using deep neural networks for heuristic approximation. We assert that these works suffer from inherent scalability limitations because -- under the assumption that P$\ne$NP -- such approaches result in either (a) network sizes that scale exponentially with the instance size or (b) heuristic approximation accuracy that scales inversely with the instance size. Our claim is supported by experimental results on three representative NP-hard search problems, which show that accurately fitting deep neural networks to heuristic functions necessitates network sizes that scale exponentially with the instance size.
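To make the premise concrete, the sketch below is a minimal textbook A* in Python; it is not taken from the paper, and the graph, edge costs, and heuristic estimates in the usage example are invented for illustration. The `heuristic` argument stands in for either an exact cost-to-go function or a learned approximator such as a deep neural network.

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Minimal A* search.

    neighbors(state) -> iterable of (successor, edge_cost);
    heuristic(state) -> estimated cost-to-go. With an exact (and consistent)
    heuristic, A* only needs to expand nodes along an optimal path, which is
    the sense in which search effort becomes polynomial in the solution depth.
    """
    frontier = [(heuristic(start), 0, start)]  # priority queue ordered by f = g + h
    best_g = {start: 0}                        # cheapest known cost-to-reach
    parent = {start: None}                     # for path reconstruction

    while frontier:
        f, g, state = heapq.heappop(frontier)
        if state == goal:
            path = []
            while state is not None:           # walk parents back to the start
                path.append(state)
                state = parent[state]
            return list(reversed(path)), g
        if g > best_g.get(state, float("inf")):
            continue                           # stale queue entry
        for succ, cost in neighbors(state):
            new_g = g + cost
            if new_g < best_g.get(succ, float("inf")):
                best_g[succ] = new_g
                parent[succ] = state
                heapq.heappush(frontier, (new_g + heuristic(succ), new_g, succ))
    return None, float("inf")


# Toy usage on a small weighted graph with hand-picked admissible estimates.
graph = {
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("G", 6)],
    "B": [("G", 2)],
    "G": [],
}
h = {"S": 3, "A": 4, "B": 2, "G": 0}  # lower bounds on the true cost-to-go
path, cost = a_star("S", "G", neighbors=lambda s: graph[s], heuristic=lambda s: h[s])
# path == ["S", "A", "B", "G"], cost == 5
```

In this sketch the quality of `heuristic` is the whole story: the closer it is to the true cost-to-go, the fewer nodes A* expands, which is precisely why the works discussed above try to learn it with deep neural networks.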