We consider the existence of fixed points of nonnegative neural networks, i.e., neural networks that take nonnegative vectors as inputs and produce nonnegative vectors as outputs. We first show that nonnegative neural networks with nonnegative weights and biases can be recognized as monotonic and (weakly) scalable functions within the framework of nonlinear Perron-Frobenius theory. This fact enables us to derive conditions for the existence of fixed points of nonnegative neural networks, and these conditions are weaker than those recently obtained using arguments from convex analysis. Furthermore, we prove that the fixed point set of nonnegative neural networks with nonnegative weights and biases is an interval, which under mild conditions degenerates to a single point. These results are then used to establish the existence of fixed points for more general classes of nonnegative neural networks. The results of this paper contribute to the understanding of the behavior of autoencoders, and they provide insight into neural networks designed with the loop-unrolling technique, which can be viewed as a fixed point search algorithm. The main theoretical results of this paper are verified in numerical simulations.
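To make the fixed-point-search view of loop unrolling concrete, the following minimal sketch (not the paper's code; the layer size, the random nonnegative weights, and the sigmoid activation are illustrative assumptions) repeatedly applies a single nonnegative layer x ↦ σ(Wx + b) and checks that the iterates settle at a fixed point.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Nonnegative weights and biases (illustrative values, not from the paper).
W = rng.uniform(0.0, 1.0, size=(n, n))
b = rng.uniform(0.0, 0.5, size=n)

def f(x):
    # One nonnegative neural network layer: the sigmoid keeps outputs in (0, 1),
    # and nonnegative W, b make the map monotonic on the nonnegative orthant.
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

# Fixed point search by repeated application (the "loop unrolling" view).
x = np.zeros(n)  # start at the origin of the nonnegative orthant
for k in range(200):
    x_next = f(x)
    if np.linalg.norm(x_next - x, ord=np.inf) < 1e-12:
        break
    x = x_next

print(f"stopped after {k} iterations")
print("fixed point x* =", x)
print("residual ||f(x*) - x*||_inf =", np.linalg.norm(f(x) - x, ord=np.inf))
```

Starting from the origin makes the iterates nondecreasing (f(0) ≥ 0 and f is monotone) and bounded above by 1 (sigmoid outputs), so this particular iteration converges by monotone convergence; the paper's results concern when such fixed points exist and what the fixed point set looks like in general.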