Flexible Transmitter Network (FTNet) is a recently proposed bio-plausible neural network that has achieved performance competitive with state-of-the-art models on spatio-temporal data. However, the theoretical understanding of FTNet remains an open problem. This work investigates the theoretical properties of one-hidden-layer FTNet from the perspectives of approximation and local minima. Under mild assumptions, we show that: i) FTNet is a universal approximator; ii) the approximation complexity of FTNet can be exponentially smaller than that of real-valued neural networks with feedforward/recurrent architectures, and is of the same order in the worst case; iii) any local minimum of FTNet is a global minimum, which suggests that local search algorithms can converge to the global minimum. Our theoretical results indicate that FTNet can efficiently express target functions and is free of spurious local minima, which fills a theoretical gap concerning FTNet and points to possibilities for improving it.