Neural Algorithmic Reasoning is an emerging area of machine learning which seeks to infuse algorithmic computation in neural networks, typically by training neural models to approximate steps of classical algorithms. In this context, much of the current work has focused on learning reachability and shortest path graph algorithms, showing that joint learning on similar algorithms is beneficial for generalisation. However, when targeting more complex problems, such similar algorithms become more difficult to find. Here, we propose to learn algorithms by exploiting duality of the underlying algorithmic problem. Many algorithms solve optimisation problems. We demonstrate that simultaneously learning the dual definition of these optimisation problems in algorithmic learning allows for better learning and qualitatively better solutions. Specifically, we exploit the max-flow min-cut theorem to simultaneously learn these two algorithms over synthetically generated graphs, demonstrating the effectiveness of the proposed approach. We then validate the real-world utility of our dual algorithmic reasoner by deploying it on a challenging brain vessel classification task, which likely depends on the vessels' flow properties. We demonstrate a clear performance gain when using our model within such a context, and empirically show that learning the max-flow and min-cut algorithms together is critical for achieving such a result.
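For readers unfamiliar with the duality referenced above, the max-flow min-cut theorem can be stated as follows. This is a standard textbook formulation, with illustrative notation (network $G=(V,E)$, capacities $c_{uv}$, source $s$, sink $t$, flow $f$) that is not taken from the paper itself:

\[
\max_{f \ge 0}\; \sum_{(s,v)\in E} f_{sv}
\quad \text{s.t.}\quad
f_{uv} \le c_{uv}\ \ \forall (u,v)\in E,
\qquad
\sum_{(u,v)\in E} f_{uv} \;=\; \sum_{(v,w)\in E} f_{vw}\ \ \forall v \in V\setminus\{s,t\},
\]
\[
\min_{\substack{S \subseteq V \\ s \in S,\; t \notin S}}\ \sum_{\substack{(u,v)\in E \\ u \in S,\; v \notin S}} c_{uv}.
\]

The theorem asserts that these two optima coincide: the value of a maximum $s$-$t$ flow equals the capacity of a minimum $s$-$t$ cut. It is this equivalence between the primal (flow) and dual (cut) formulations that the proposed dual algorithmic reasoner learns jointly.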