Besides classical feed-forward neural networks such as multilayer perceptrons, neural ordinary differential equations (neural ODEs) have gained particular interest in recent years. Neural ODEs can be interpreted as an infinite-depth limit of feed-forward or residual neural networks. We study the input-output dynamics of finite- and infinite-depth neural networks with scalar output. In the finite-depth case, the input is a state associated with a finite number of nodes, which is mapped under multiple non-linear transformations to the state of one output node. Analogously, for a neural ODE, the input is first transformed affine linearly, then evolved under the time-$T$ map of the ODE, and finally mapped to the scalar output by a second affine linear transformation. We show that, depending on the specific structure of the network, the input-output map has different properties regarding the existence and regularity of its critical points. These properties can be characterized via Morse functions, which are scalar functions whose critical points are all non-degenerate. We prove that critical points cannot exist if the dimensions of the hidden layers are monotonically decreasing (in the finite-depth case) or if the dimension of the phase space is smaller than or equal to the input dimension (in the neural ODE case). In the case that critical points exist, we classify their regularity depending on the specific architecture of the network. We show that, except for a set of Lebesgue measure zero in weight space, every critical point is non-degenerate, provided that for finite-depth neural networks the underlying graph has no bottleneck, and that for neural ODEs the affine linear transformations used have full rank. For each type of architecture, the proven properties are comparable in the finite- and infinite-depth cases. The established theorems allow us to formulate results on universal embedding and universal approximation, i.e., on the exact and approximate representation of maps by neural networks and neural ODEs.
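To make the neural ODE setting concrete, the input-output map described above can be written as follows; the symbols $L_{\mathrm{in}}$, $L_{\mathrm{out}}$, $\phi_T$ and $f_\theta$ are illustrative notation and are not taken from the abstract. Let $L_{\mathrm{in}}\colon\mathbb{R}^n\to\mathbb{R}^d$ and $L_{\mathrm{out}}\colon\mathbb{R}^d\to\mathbb{R}$ be affine linear maps and let $\phi_T$ denote the time-$T$ map of the ODE $\dot h = f_\theta(h)$. The scalar input-output map is then
\[
\Psi(x) \;=\; L_{\mathrm{out}}\bigl(\phi_T(L_{\mathrm{in}}(x))\bigr), \qquad x \in \mathbb{R}^n,
\]
and $\Psi$ is a Morse function precisely if every critical point $x^\ast$ with $\nabla\Psi(x^\ast)=0$ has a non-singular Hessian $\nabla^2\Psi(x^\ast)$.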