Neural Combinatorial Optimization attempts to learn good heuristics for solving a set of problems using Neural Network models and Reinforcement Learning. Recently, its good performance has encouraged many practitioners to develop neural architectures for a wide variety of combinatorial problems. However, the incorporation of such algorithms into the conventional optimization framework has raised many questions related to their performance and their experimental comparison with other methods such as exact algorithms, heuristics and metaheuristics. This paper presents a critical analysis of the incorporation of algorithms based on neural networks into the classical combinatorial optimization framework. Subsequently, a comprehensive study is carried out to analyse the fundamental aspects of such algorithms, including performance, transferability, computational cost and generalization to larger-sized instances. To that end, we select the Linear Ordering Problem, an NP-hard problem, as a case study and develop a Neural Combinatorial Optimization model to optimize it. Finally, we discuss how the analysed aspects apply to a general learning framework, and suggest new directions for future work in the area of Neural Combinatorial Optimization algorithms.
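For reference, a standard formulation of the Linear Ordering Problem used as the case study can be stated as follows (a minimal sketch assuming the usual LOP definition; the symbols $B = [b_{ij}]$, $\sigma$ and $\Sigma_n$ are introduced here for illustration and are not necessarily the notation used later in the paper):

\[
\max_{\sigma \in \Sigma_n} \; f(\sigma) = \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} b_{\sigma(i)\,\sigma(j)}
\]

where $B = [b_{ij}]$ is an $n \times n$ input matrix and $\Sigma_n$ is the set of permutations of $\{1,\dots,n\}$; that is, the goal is to reorder the rows and columns of $B$ so that the sum of the entries above the main diagonal is maximized.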