Neural Arithmetic Logic Modules have become a growing area of interest, though they remain a niche field. These modules are neural networks which aim to achieve systematic generalisation in learning arithmetic and/or logic operations such as $\{+, -, \times, \div, \leq, \textrm{AND}\}$ while also being interpretable. This paper is the first to discuss the current state of progress of this field, explaining key works, starting with the Neural Arithmetic Logic Unit (NALU). Focusing on the shortcomings of the NALU, we provide an in-depth analysis to reason about design choices of recent modules. A cross-comparison between modules is made on experiment setups and findings, where we highlight inconsistencies in a fundamental experiment that prevent direct comparison across papers. To alleviate the existing inconsistencies, we create a benchmark which compares all existing arithmetic NALMs. We finish by providing a novel discussion of existing applications of the NALU and research directions requiring further exploration.