The method recently introduced in arXiv:2011.10115 realizes a deep neural network with just a single nonlinear element and delayed feedback. It is applicable to the description of physically implemented neural networks. In this work, we present an infinite-dimensional generalization, which allows for a more rigorous mathematical analysis and greater flexibility in choosing the weight functions. More precisely, the weights are described by Lebesgue-integrable functions instead of step functions. We also provide a functional backpropagation algorithm, which enables gradient-descent training of the weights. In addition, with a slight modification, our concept realizes recurrent neural networks.
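To illustrate the underlying idea of the delay-based construction, the following is a minimal, hypothetical sketch: a single nonlinearity f, applied repeatedly over successive time intervals, emulates the layers of a feedforward network, with delayed feedback delivering the previous interval's states. The matrices `W_in`, `Ws`, and `W_out` stand in for discretized samples of the weight functions; all names and shapes here are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def folded_forward(u, W_in, Ws, W_out, f=np.tanh):
    """Emulate a deep feedforward pass with one nonlinear node over time.

    Each pass over a time interval of N steps plays the role of one hidden
    layer; the delayed feedback (here: the matrix product with the previous
    interval's states) supplies the layer-to-layer connections.
    """
    x = f(W_in @ u)        # first interval: N states of the single node
    for W in Ws:           # one loop iteration = one further "layer" in time
        x = f(W @ x)       # delayed feedback from the previous interval
    return W_out @ x       # linear readout of the final interval

# Illustrative usage with random weights (hypothetical sizes).
rng = np.random.default_rng(0)
N, L = 4, 3                                   # nodes per interval, "layers"
u = rng.normal(size=2)                        # input vector
W_in = rng.normal(size=(N, 2))
Ws = [rng.normal(size=(N, N)) for _ in range(L - 1)]
W_out = rng.normal(size=(1, N))
y = folded_forward(u, W_in, Ws, W_out)
```

Unrolled this way, the delay system is mathematically equivalent to an ordinary multilayer perceptron, which is the sense in which a single nonlinear element with delayed feedback can realize a deep network.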