A general lack of understanding of deep feedforward neural networks (DNNs) can be attributed partly to a lack of tools with which to analyze the composition of non-linear functions, and partly to a lack of mathematical models applicable to the diversity of DNN architectures. In this paper, we make a number of basic assumptions pertaining to activation functions, non-linear transformations, and DNN architectures in order to use the un-rectifying method to analyze DNNs via directed acyclic graphs (DAGs). DNNs that satisfy these assumptions are referred to as general DNNs. Our construction of an analytic graph is based on an axiomatic method in which DAGs are built bottom-up by applying atomic operations to basic elements in accordance with regulatory rules. This approach allows us to derive the properties of general DNNs via mathematical induction. We show that several properties holding for general DNNs can be derived using the proposed approach. This analysis advances our understanding of network functions and could promote further theoretical insights if the wealth of analytical tools for graphs can be leveraged.