Neural Arithmetic Logic Modules have become a growing area of interest, though they remain a niche field. These units are small neural networks which aim to achieve systematic generalisation in learning arithmetic operations such as {+, -, *, /} while also being interpretable in their weights. This paper is the first to discuss the current state of progress in this field, explaining key works, starting with the Neural Arithmetic Logic Unit (NALU). Focusing on the shortcomings of the NALU, we provide an in-depth analysis to reason about the design choices of recent units. A cross-comparison between units is made on experimental setups and findings, where we highlight inconsistencies in a fundamental experiment that prevent direct comparison across papers. We finish by providing a novel discussion of existing applications of the NALU and research directions requiring further exploration.