In this work, we show that neural networks can be represented via the mathematical theory of quiver representations. More specifically, we prove that a neural network is a quiver representation with activation functions, a mathematical object that we model using a network quiver. We also show that network quivers naturally accommodate common neural network concepts such as fully connected layers, convolution operations, residual connections, batch normalization, pooling operations, and even randomly wired neural networks. This mathematical representation is by no means an approximation of what neural networks are: it matches reality exactly. The interpretation is algebraic and can be studied with algebraic methods. We also provide a quiver representation model to understand how a neural network creates representations from data. We show that a neural network encodes the data as quiver representations and maps them to a geometrical space called the moduli space, which is defined in terms of the underlying oriented graph of the network, i.e., its quiver. This follows from our defined objects and from understanding, in a combinatorial and algebraic way, how the neural network computes a prediction. Overall, representing neural networks through quiver representation theory leads to nine consequences and four lines of inquiry for future research that we believe are of great interest for better understanding what neural networks are and how they work.
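To make the central object concrete, the following is a minimal illustrative sketch (not code from the paper) of a network quiver: a directed acyclic graph whose arrows carry weights, which together form the quiver representation, and whose vertices carry activation functions. The vertex names, weights, and the `forward` helper are all hypothetical, chosen only to show how a prediction is computed by propagating values along the arrows of the quiver.

```python
import math

# Arrows of the quiver: (source vertex, target vertex, weight).
# The assignment of a weight to each arrow is the quiver representation.
arrows = [
    ("in1", "h1", 0.5), ("in2", "h1", -1.0),
    ("in1", "h2", 1.5), ("in2", "h2", 0.25),
    ("h1", "out", 2.0), ("h2", "out", -0.5),
]

# Activation function attached to each non-input vertex.
relu = lambda x: max(0.0, x)
identity = lambda x: x
activations = {"h1": relu, "h2": relu, "out": identity}

def forward(inputs, order=("h1", "h2", "out")):
    """Compute the network's output by visiting vertices in topological
    order, summing incoming arrow contributions, then applying the
    vertex's activation function."""
    values = dict(inputs)
    for v in order:
        pre_activation = sum(values[s] * w for (s, t, w) in arrows if t == v)
        values[v] = activations[v](pre_activation)
    return values["out"]

# Example prediction: h1 = relu(0.5 - 2.0) = 0, h2 = relu(1.5 + 0.5) = 2.0,
# out = 0*2.0 + 2.0*(-0.5) = -1.0
print(forward({"in1": 1.0, "in2": 2.0}))  # → -1.0
```

This combinatorial view, where weights live on arrows and activations live on vertices, is exactly the structure that the paper's algebraic treatment formalizes.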