Neural network models generally involve two key components: the network architecture and the neuron model. Although network architectures have been studied extensively, only a few neuron models have been developed, such as the MP neuron model proposed in 1943 and the spiking neuron model developed in the 1950s. Recently, a new biologically plausible neuron model, the Flexible Transmitter (FT) model, has been proposed. It exhibits promising behavior, particularly on spatio-temporal signals, even when simply embedded into a common feedforward network architecture. This paper attempts to understand the properties of the FT network (FTNet) theoretically. Under mild assumptions, we show that: (i) FTNet is a universal approximator; (ii) the approximation complexity of FTNet can be exponentially smaller than that of commonly used real-valued neural networks with feedforward or recurrent architectures, and is of the same order in the worst case; (iii) any local minimum of FTNet is a global minimum, implying that global minima can be found by local search algorithms.