We introduce a neural stack architecture: a differentiable, parametrized stack operator that, for suitable choices of parameters, approximates the push and pop operations of a discrete stack and explicitly represents its contents. We prove the stability of this stack architecture: after arbitrarily many stack operations, the state of the neural stack still closely resembles that of the discrete stack. Coupling the neural stack with a recurrent neural network, we introduce a neural network pushdown automaton (nnPDA) and prove that an nnPDA with finitely many bounded-precision neurons and finite time can simulate any PDA. Furthermore, we extend this construction and propose a new architecture, the neural state Turing Machine (nnTM). We prove that a differentiable nnTM with bounded-precision neurons can simulate any Turing machine (TM) in real time. Like the neural stack, these architectures are also stable. Finally, we extend our construction to show that the differentiable nnTM is equivalent to a universal Turing machine (UTM) and can simulate any TM with only \textbf{seven finite/bounded-precision} neurons. This work provides a new theoretical bound on the computational capability of bounded-precision RNNs augmented with memory.
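To illustrate the idea of a differentiable stack operator, the following is a minimal sketch (not the paper's construction; the `soft_stack_step` function, the fixed-depth matrix representation, and the three-way action simplex are illustrative assumptions): the stack update is a convex combination of the discrete push, pop, and no-op results, so a near-one-hot action weight approximates the corresponding discrete operation.

```python
import numpy as np

def soft_stack_step(stack, value, action):
    """One differentiable stack update (illustrative sketch).

    stack  : (depth, dim) array; row 0 holds the top element.
    value  : (dim,) vector to (softly) push.
    action : (3,) simplex weights for [push, pop, no-op].
    """
    # Discrete push: shift all rows down one slot, insert value on top.
    push = np.vstack([value, stack[:-1]])
    # Discrete pop: shift all rows up one slot, pad the bottom with zeros.
    pop = np.vstack([stack[1:], np.zeros((1, stack.shape[1]))])
    # Convex combination of the three discrete outcomes.
    return action[0] * push + action[1] * pop + action[2] * stack

depth, dim = 4, 3
stack = np.zeros((depth, dim))
v = np.array([1.0, 0.0, 0.0])

# A near-one-hot "push" action approximates a discrete push,
# so the top row ends up close to v.
stack = soft_stack_step(stack, v, np.array([0.98, 0.01, 0.01]))
```

The stability claim in the abstract corresponds, in this toy setting, to the distance between the soft stack and its discrete counterpart remaining small even after many such near-one-hot updates.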