The approximation capability of ANNs, and of their RNN instantiations, is strongly correlated with the number of parameters packed into these networks. However, the complexity barrier for human understanding is arguably related to the number of neurons and synapses in the networks, and to the associated nonlinear transformations. In this paper we show that the use of biophysical synapses, as found in LTCs, has two main benefits. First, they allow us to pack more parameters for a given number of neurons and synapses. Second, they allow us to formulate the nonlinear network transformation as a linear system with state-dependent coefficients. Both increase interpretability: for a given task, they allow us to learn a system that is linear in its input features and smaller in size than the state of the art. We substantiate the above claims on various time-series prediction tasks, but we believe that our results are applicable to any feedforward or recurrent ANN.
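To make the "linear system with state-dependent coefficients" view concrete, the following is a minimal sketch of one explicit-Euler step of a liquid time-constant (LTC) cell, assuming the standard LTC dynamics dx/dt = -(1/τ + f(x, I))·x + f(x, I)·A; the function name `ltc_step` and the use of `tanh` as the synaptic nonlinearity are illustrative choices, not the paper's exact parameterization.

```python
import numpy as np

def ltc_step(x, I, dt, tau, W, W_in, b, A):
    """One explicit-Euler step of an illustrative LTC cell.

    The synaptic nonlinearity f enters only as a coefficient of x
    (and of the bias-like term A), so for fixed f the update is a
    linear ODE in x: dx/dt = -(1/tau + f) * x + f * A. The
    nonlinearity is pushed into the state-dependent coefficients.
    """
    f = np.tanh(W @ x + W_in @ I + b)        # synaptic gate, depends on state and input
    dxdt = -(1.0 / tau + f) * x + f * A      # linear in x, with state-dependent coefficients
    return x + dt * dxdt
```

Viewed this way, interpreting a trained network reduces to inspecting the (time-varying) coefficients of a linear system rather than tracing an opaque nonlinear composition.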