We consider the existence of fixed points of nonnegative neural networks, i.e., neural networks that take nonnegative vectors as input and process them with nonnegative parameters. We first show that nonnegative neural networks can be recognized as monotonic and (weakly) scalable functions within the framework of nonlinear Perron-Frobenius theory. This fact enables us to derive conditions for the existence of fixed points of nonnegative neural networks that are weaker than those recently obtained using arguments from convex analysis. Furthermore, we prove that the fixed point set of a nonnegative neural network is often an interval, which degenerates to a single point in the case of scalable networks. The results of this paper contribute to the understanding of the behavior of autoencoders, because the fixed point set of an autoencoder is precisely the set of inputs that can be perfectly reconstructed. Moreover, they provide insight into neural networks designed with the loop-unrolling technique, which can be viewed as a fixed-point search algorithm (a sketch follows below). The main theoretical results of this paper are verified in numerical simulations, where we consider an autoencoder that first compresses angular power spectra in massive MIMO systems and then reconstructs the input spectra from the compressed signals.
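For reference, the following is a minimal sketch of the monotonicity and scalability notions invoked above, stated componentwise for a mapping on the nonnegative orthant; these are the standard definitions from nonlinear Perron-Frobenius theory, and the paper's exact variants may differ in detail.

```latex
% Standard nonlinear Perron-Frobenius definitions (hedged sketch),
% for T : R^n_+ -> R^n_+ with the componentwise partial order.
\begin{align*}
  &T \text{ is \emph{monotonic} if } x \le y \;\Rightarrow\; T(x) \le T(y),\\
  &T \text{ is \emph{weakly scalable} if } T(\alpha x) \le \alpha\, T(x)
    \quad \forall\, x \in \mathbb{R}^n_+,\ \alpha \ge 1,\\
  &T \text{ is \emph{scalable} if } T(\alpha x) < \alpha\, T(x)
    \quad \forall\, x \in \mathbb{R}^n_+,\ \alpha > 1.
\end{align*}
```

As a one-line check of why nonnegative networks fit this framework: a layer $x \mapsto \mathrm{ReLU}(Wx + b)$ with $W \ge 0$ and $b \ge 0$ satisfies $\mathrm{ReLU}(\alpha W x + b) \le \mathrm{ReLU}(\alpha(Wx + b)) = \alpha\,\mathrm{ReLU}(Wx + b)$ for $\alpha \ge 1$, so it is monotonic and weakly scalable, and both properties are preserved under composition of layers.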
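To make the loop-unrolling viewpoint concrete, here is a minimal sketch, which is our own illustration and not the paper's simulation setup: a toy single-layer nonnegative ReLU network whose weight matrix is scaled to have spectral norm below one (so the iteration is a contraction and provably converges), iterated as a fixed-point search. The sizes, scaling factor, and tolerance are illustrative assumptions.

```python
# Hedged sketch (illustrative, not the paper's code): fixed-point iteration
# on a toy nonnegative one-layer network T(x) = ReLU(W x + b), W >= 0, b >= 0.
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = rng.random((n, n))
W *= 0.9 / np.linalg.norm(W, 2)  # spectral norm < 1 => T is a contraction
b = rng.random(n)                # nonnegative bias

def T(x):
    """Monotonic, weakly scalable map: ReLU applied to a nonnegative affine layer."""
    return np.maximum(W @ x + b, 0.0)

# Loop unrolling viewed as fixed-point search: iterate x_{k+1} = T(x_k).
x = np.zeros(n)
for k in range(200):
    x_next = T(x)
    if np.max(np.abs(x_next - x)) < 1e-10:
        break
    x = x_next

print(f"stopped after {k} iterations; residual ||T(x) - x||_inf = "
      f"{np.max(np.abs(T(x) - x)):.2e}")
```

In this contractive toy case the fixed point is unique; it is exactly the input that the unrolled network reproduces without error, mirroring the abstract's observation that an autoencoder's fixed point set is the set of perfectly reconstructed inputs.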